00:00:00.001 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v23.11" build number 167 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3668 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.038 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.045 The recommended git tool is: git 00:00:00.045 using credential 00000000-0000-0000-0000-000000000002 00:00:00.066 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.082 Fetching changes from the remote Git repository 00:00:00.085 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.106 Using shallow fetch with depth 1 00:00:00.106 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.106 > git --version # timeout=10 00:00:00.123 > git --version # 'git version 2.39.2' 00:00:00.123 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.149 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.149 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.480 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.490 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.502 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:03.502 > git config core.sparsecheckout # timeout=10 00:00:03.512 > git read-tree -mu HEAD # timeout=10 00:00:03.528 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 
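The checkout above is a shallow, single-ref fetch followed by a forced detached checkout of FETCH_HEAD. A minimal standalone reproduction of that pattern against a throwaway local repository (all paths here are scratch directories; the review.spdk.io URL from the log is not contacted):

```shell
#!/usr/bin/env bash
# Reproduce the shallow fetch + detached checkout pattern from the log,
# using a scratch repository as a stand-in for the Gerrit remote.
set -e

src=$(mktemp -d)                          # stand-in for the remote
git -C "$src" init -q
git -C "$src" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "seed commit"
sha=$(git -C "$src" rev-parse HEAD)

work=$(mktemp -d)                         # stand-in for the Jenkins workspace
git -C "$work" init -q
# --depth=1 mirrors "Using shallow fetch with depth 1" in the log
git -C "$work" fetch -q --depth=1 "file://$src" HEAD
git -C "$work" checkout -q -f FETCH_HEAD  # detached checkout, as Jenkins does
echo "checked out $(git -C "$work" rev-parse HEAD)"
```

The detached checkout is why the log later reports a bare revision rather than a branch name.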
00:00:03.552 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:03.552 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:03.657 [Pipeline] Start of Pipeline 00:00:03.670 [Pipeline] library 00:00:03.672 Loading library shm_lib@master 00:00:03.672 Library shm_lib@master is cached. Copying from home. 00:00:03.690 [Pipeline] node 00:00:03.707 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:03.709 [Pipeline] { 00:00:03.718 [Pipeline] catchError 00:00:03.719 [Pipeline] { 00:00:03.729 [Pipeline] wrap 00:00:03.740 [Pipeline] { 00:00:03.748 [Pipeline] stage 00:00:03.750 [Pipeline] { (Prologue) 00:00:03.770 [Pipeline] echo 00:00:03.771 Node: VM-host-WFP7 00:00:03.780 [Pipeline] cleanWs 00:00:03.790 [WS-CLEANUP] Deleting project workspace... 00:00:03.790 [WS-CLEANUP] Deferred wipeout is used... 00:00:03.798 [WS-CLEANUP] done 00:00:03.988 [Pipeline] setCustomBuildProperty 00:00:04.081 [Pipeline] httpRequest 00:00:04.408 [Pipeline] echo 00:00:04.410 Sorcerer 10.211.164.20 is alive 00:00:04.419 [Pipeline] retry 00:00:04.422 [Pipeline] { 00:00:04.437 [Pipeline] httpRequest 00:00:04.441 HttpMethod: GET 00:00:04.442 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.443 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.443 Response Code: HTTP/1.1 200 OK 00:00:04.444 Success: Status code 200 is in the accepted range: 200,404 00:00:04.445 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.590 [Pipeline] } 00:00:04.605 [Pipeline] // retry 00:00:04.611 [Pipeline] sh 00:00:04.897 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.912 [Pipeline] httpRequest 00:00:05.565 [Pipeline] echo 00:00:05.567 Sorcerer 10.211.164.20 is alive 00:00:05.575 [Pipeline] retry 00:00:05.578 
[Pipeline] { 00:00:05.589 [Pipeline] httpRequest 00:00:05.593 HttpMethod: GET 00:00:05.594 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:05.594 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:05.595 Response Code: HTTP/1.1 200 OK 00:00:05.596 Success: Status code 200 is in the accepted range: 200,404 00:00:05.596 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:26.167 [Pipeline] } 00:00:26.185 [Pipeline] // retry 00:00:26.193 [Pipeline] sh 00:00:26.477 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:29.029 [Pipeline] sh 00:00:29.314 + git -C spdk log --oneline -n5 00:00:29.314 b18e1bd62 version: v24.09.1-pre 00:00:29.314 19524ad45 version: v24.09 00:00:29.314 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:00:29.314 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:00:29.314 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:00:29.335 [Pipeline] withCredentials 00:00:29.347 > git --version # timeout=10 00:00:29.360 > git --version # 'git version 2.39.2' 00:00:29.378 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:29.380 [Pipeline] { 00:00:29.390 [Pipeline] retry 00:00:29.392 [Pipeline] { 00:00:29.408 [Pipeline] sh 00:00:29.692 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:29.973 [Pipeline] } 00:00:30.005 [Pipeline] // retry 00:00:30.008 [Pipeline] } 00:00:30.018 [Pipeline] // withCredentials 00:00:30.025 [Pipeline] httpRequest 00:00:30.415 [Pipeline] echo 00:00:30.417 Sorcerer 10.211.164.20 is alive 00:00:30.426 [Pipeline] retry 00:00:30.428 [Pipeline] { 00:00:30.441 [Pipeline] httpRequest 00:00:30.446 HttpMethod: GET 00:00:30.447 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:30.447 Sending request to url: 
http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:30.461 Response Code: HTTP/1.1 200 OK 00:00:30.461 Success: Status code 200 is in the accepted range: 200,404 00:00:30.462 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:58.621 [Pipeline] } 00:00:58.640 [Pipeline] // retry 00:00:58.648 [Pipeline] sh 00:00:58.934 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:00.328 [Pipeline] sh 00:01:00.612 + git -C dpdk log --oneline -n5 00:01:00.613 eeb0605f11 version: 23.11.0 00:01:00.613 238778122a doc: update release notes for 23.11 00:01:00.613 46aa6b3cfc doc: fix description of RSS features 00:01:00.613 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:00.613 7e421ae345 devtools: support skipping forbid rule check 00:01:00.632 [Pipeline] writeFile 00:01:00.648 [Pipeline] sh 00:01:00.933 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:00.946 [Pipeline] sh 00:01:01.230 + cat autorun-spdk.conf 00:01:01.230 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:01.230 SPDK_RUN_ASAN=1 00:01:01.230 SPDK_RUN_UBSAN=1 00:01:01.230 SPDK_TEST_RAID=1 00:01:01.230 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:01.230 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:01.230 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:01.238 RUN_NIGHTLY=1 00:01:01.240 [Pipeline] } 00:01:01.255 [Pipeline] // stage 00:01:01.271 [Pipeline] stage 00:01:01.273 [Pipeline] { (Run VM) 00:01:01.287 [Pipeline] sh 00:01:01.571 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:01.571 + echo 'Start stage prepare_nvme.sh' 00:01:01.571 Start stage prepare_nvme.sh 00:01:01.571 + [[ -n 1 ]] 00:01:01.571 + disk_prefix=ex1 00:01:01.571 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:01:01.571 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:01:01.571 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 
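The `+ source .../autorun-spdk.conf` step above is why each assignment then reappears as a `++` xtrace line: sourcing turns every `KEY=value` line of the conf file into a shell variable in the running script. A minimal sketch of the same pattern, using a temp file and two of the keys shown in the log:

```shell
#!/usr/bin/env bash
# Write a tiny autorun-style config and source it, as the
# "+ source .../autorun-spdk.conf" step in the log does.
set -e
conf=$(mktemp)                        # temp stand-in for autorun-spdk.conf
cat > "$conf" <<'EOF'
SPDK_RUN_FUNCTIONAL_TEST=1
SPDK_TEST_NATIVE_DPDK=v23.11
EOF
source "$conf"                        # each KEY=value line becomes a variable
echo "functional=$SPDK_RUN_FUNCTIONAL_TEST dpdk=$SPDK_TEST_NATIVE_DPDK"
```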
00:01:01.571 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:01.571 ++ SPDK_RUN_ASAN=1 00:01:01.571 ++ SPDK_RUN_UBSAN=1 00:01:01.571 ++ SPDK_TEST_RAID=1 00:01:01.571 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:01.571 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:01.571 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:01.571 ++ RUN_NIGHTLY=1 00:01:01.571 + cd /var/jenkins/workspace/raid-vg-autotest 00:01:01.571 + nvme_files=() 00:01:01.571 + declare -A nvme_files 00:01:01.571 + backend_dir=/var/lib/libvirt/images/backends 00:01:01.571 + nvme_files['nvme.img']=5G 00:01:01.571 + nvme_files['nvme-cmb.img']=5G 00:01:01.571 + nvme_files['nvme-multi0.img']=4G 00:01:01.572 + nvme_files['nvme-multi1.img']=4G 00:01:01.572 + nvme_files['nvme-multi2.img']=4G 00:01:01.572 + nvme_files['nvme-openstack.img']=8G 00:01:01.572 + nvme_files['nvme-zns.img']=5G 00:01:01.572 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:01.572 + (( SPDK_TEST_FTL == 1 )) 00:01:01.572 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:01.572 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:01.572 + for nvme in "${!nvme_files[@]}" 00:01:01.572 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:01:01.572 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:01.572 + for nvme in "${!nvme_files[@]}" 00:01:01.572 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:01:01.572 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:01.572 + for nvme in "${!nvme_files[@]}" 00:01:01.572 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:01:01.572 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:01.572 + for nvme in "${!nvme_files[@]}" 00:01:01.572 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:01:01.572 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:01.572 + for nvme in "${!nvme_files[@]}" 00:01:01.572 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:01:01.572 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:01.572 + for nvme in "${!nvme_files[@]}" 00:01:01.572 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:01:01.572 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:01.831 + for nvme in "${!nvme_files[@]}" 00:01:01.831 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:01:01.831 
Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:01.831 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:01:01.831 + echo 'End stage prepare_nvme.sh' 00:01:01.831 End stage prepare_nvme.sh 00:01:01.843 [Pipeline] sh 00:01:02.121 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:02.121 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39 00:01:02.121 00:01:02.121 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:01:02.121 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:01:02.121 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:01:02.121 HELP=0 00:01:02.121 DRY_RUN=0 00:01:02.121 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:01:02.121 NVME_DISKS_TYPE=nvme,nvme, 00:01:02.121 NVME_AUTO_CREATE=0 00:01:02.121 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:01:02.121 NVME_CMB=,, 00:01:02.121 NVME_PMR=,, 00:01:02.121 NVME_ZNS=,, 00:01:02.121 NVME_MS=,, 00:01:02.121 NVME_FDP=,, 00:01:02.121 SPDK_VAGRANT_DISTRO=fedora39 00:01:02.121 SPDK_VAGRANT_VMCPU=10 00:01:02.121 SPDK_VAGRANT_VMRAM=12288 00:01:02.121 SPDK_VAGRANT_PROVIDER=libvirt 00:01:02.121 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:02.121 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:02.121 SPDK_OPENSTACK_NETWORK=0 00:01:02.121 VAGRANT_PACKAGE_BOX=0 00:01:02.121 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:02.121 
FORCE_DISTRO=true 00:01:02.121 VAGRANT_BOX_VERSION= 00:01:02.121 EXTRA_VAGRANTFILES= 00:01:02.121 NIC_MODEL=virtio 00:01:02.121 00:01:02.121 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:01:02.121 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:01:04.024 Bringing machine 'default' up with 'libvirt' provider... 00:01:04.593 ==> default: Creating image (snapshot of base box volume). 00:01:04.593 ==> default: Creating domain with the following settings... 00:01:04.593 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732625061_ff4de889745424578b25 00:01:04.593 ==> default: -- Domain type: kvm 00:01:04.593 ==> default: -- Cpus: 10 00:01:04.593 ==> default: -- Feature: acpi 00:01:04.593 ==> default: -- Feature: apic 00:01:04.593 ==> default: -- Feature: pae 00:01:04.593 ==> default: -- Memory: 12288M 00:01:04.593 ==> default: -- Memory Backing: hugepages: 00:01:04.593 ==> default: -- Management MAC: 00:01:04.593 ==> default: -- Loader: 00:01:04.593 ==> default: -- Nvram: 00:01:04.593 ==> default: -- Base box: spdk/fedora39 00:01:04.593 ==> default: -- Storage pool: default 00:01:04.593 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732625061_ff4de889745424578b25.img (20G) 00:01:04.593 ==> default: -- Volume Cache: default 00:01:04.593 ==> default: -- Kernel: 00:01:04.593 ==> default: -- Initrd: 00:01:04.593 ==> default: -- Graphics Type: vnc 00:01:04.593 ==> default: -- Graphics Port: -1 00:01:04.593 ==> default: -- Graphics IP: 127.0.0.1 00:01:04.593 ==> default: -- Graphics Password: Not defined 00:01:04.593 ==> default: -- Video Type: cirrus 00:01:04.593 ==> default: -- Video VRAM: 9216 00:01:04.593 ==> default: -- Sound Type: 00:01:04.593 ==> default: -- Keymap: en-us 00:01:04.593 ==> default: -- TPM Path: 00:01:04.593 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:04.593 ==> default: -- Command line args: 00:01:04.593 
==> default: -> value=-device, 00:01:04.593 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:04.593 ==> default: -> value=-drive, 00:01:04.593 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:01:04.593 ==> default: -> value=-device, 00:01:04.593 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:04.593 ==> default: -> value=-device, 00:01:04.593 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:04.593 ==> default: -> value=-drive, 00:01:04.593 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:04.593 ==> default: -> value=-device, 00:01:04.593 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:04.593 ==> default: -> value=-drive, 00:01:04.593 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:04.593 ==> default: -> value=-device, 00:01:04.593 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:04.593 ==> default: -> value=-drive, 00:01:04.593 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:04.593 ==> default: -> value=-device, 00:01:04.593 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:04.853 ==> default: Creating shared folders metadata... 00:01:04.853 ==> default: Starting domain. 00:01:06.259 ==> default: Waiting for domain to get an IP address... 00:01:24.371 ==> default: Waiting for SSH to become available... 00:01:24.371 ==> default: Configuring and enabling network interfaces... 
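Each backing image above is wired up as a `-drive` with `if=none`, paired with an `nvme-ns` device on an `nvme` controller. A sketch of how one such controller/namespace triplet can be assembled as a bash argument array (the path and IDs are copied from the log for illustration only; QEMU itself is not invoked here):

```shell
#!/usr/bin/env bash
# Assemble one nvme controller with a single namespace, mirroring the
# -device/-drive/-device triplet printed in the log. Nothing is executed.
img=/var/lib/libvirt/images/backends/ex1-nvme.img   # backing file from the log
args=(
  -device "nvme,id=nvme-0,serial=12340,addr=0x10"
  -drive  "format=raw,file=$img,if=none,id=nvme-0-drive0"
  -device "nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096"
)
echo "${#args[@]} qemu arguments prepared"
```

The second controller in the log (`nvme-1`) follows the same shape, just with three `-drive`/`nvme-ns` pairs for its three namespaces.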
00:01:29.659 default: SSH address: 192.168.121.148:22 00:01:29.659 default: SSH username: vagrant 00:01:29.659 default: SSH auth method: private key 00:01:32.204 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:40.339 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:46.914 ==> default: Mounting SSHFS shared folder... 00:01:48.300 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:48.300 ==> default: Checking Mount.. 00:01:50.293 ==> default: Folder Successfully Mounted! 00:01:50.293 ==> default: Running provisioner: file... 00:01:51.234 default: ~/.gitconfig => .gitconfig 00:01:51.495 00:01:51.495 SUCCESS! 00:01:51.495 00:01:51.495 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:51.495 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:51.495 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:01:51.495 00:01:51.765 [Pipeline] } 00:01:51.779 [Pipeline] // stage 00:01:51.789 [Pipeline] dir 00:01:51.790 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:01:51.791 [Pipeline] { 00:01:51.801 [Pipeline] catchError 00:01:51.802 [Pipeline] { 00:01:51.812 [Pipeline] sh 00:01:52.094 + vagrant ssh-config --host vagrant 00:01:52.094 + sed -ne /^Host/,$p 00:01:52.094 + tee ssh_conf 00:01:54.635 Host vagrant 00:01:54.635 HostName 192.168.121.148 00:01:54.635 User vagrant 00:01:54.635 Port 22 00:01:54.635 UserKnownHostsFile /dev/null 00:01:54.635 StrictHostKeyChecking no 00:01:54.635 PasswordAuthentication no 00:01:54.635 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:54.635 IdentitiesOnly yes 00:01:54.635 LogLevel FATAL 00:01:54.635 ForwardAgent yes 00:01:54.635 ForwardX11 yes 00:01:54.635 00:01:54.650 [Pipeline] withEnv 00:01:54.652 [Pipeline] { 00:01:54.665 [Pipeline] sh 00:01:54.948 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:54.948 source /etc/os-release 00:01:54.948 [[ -e /image.version ]] && img=$(< /image.version) 00:01:54.948 # Minimal, systemd-like check. 00:01:54.948 if [[ -e /.dockerenv ]]; then 00:01:54.948 # Clear garbage from the node's name: 00:01:54.948 # agt-er_autotest_547-896 -> autotest_547-896 00:01:54.948 # $HOSTNAME is the actual container id 00:01:54.948 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:54.948 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:54.948 # We can assume this is a mount from a host where container is running, 00:01:54.948 # so fetch its hostname to easily identify the target swarm worker. 
00:01:54.948 container="$(< /etc/hostname) ($agent)" 00:01:54.948 else 00:01:54.948 # Fallback 00:01:54.948 container=$agent 00:01:54.948 fi 00:01:54.948 fi 00:01:54.948 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:54.948 00:01:55.221 [Pipeline] } 00:01:55.239 [Pipeline] // withEnv 00:01:55.249 [Pipeline] setCustomBuildProperty 00:01:55.266 [Pipeline] stage 00:01:55.268 [Pipeline] { (Tests) 00:01:55.286 [Pipeline] sh 00:01:55.574 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:55.849 [Pipeline] sh 00:01:56.136 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:56.411 [Pipeline] timeout 00:01:56.411 Timeout set to expire in 1 hr 30 min 00:01:56.413 [Pipeline] { 00:01:56.427 [Pipeline] sh 00:01:56.710 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:57.280 HEAD is now at b18e1bd62 version: v24.09.1-pre 00:01:57.294 [Pipeline] sh 00:01:57.584 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:57.859 [Pipeline] sh 00:01:58.141 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:58.419 [Pipeline] sh 00:01:58.703 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:01:58.974 ++ readlink -f spdk_repo 00:01:58.975 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:58.975 + [[ -n /home/vagrant/spdk_repo ]] 00:01:58.975 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:58.975 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:58.975 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:58.975 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:58.975 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:58.975 + [[ raid-vg-autotest == pkgdep-* ]] 00:01:58.975 + cd /home/vagrant/spdk_repo 00:01:58.975 + source /etc/os-release 00:01:58.975 ++ NAME='Fedora Linux' 00:01:58.975 ++ VERSION='39 (Cloud Edition)' 00:01:58.975 ++ ID=fedora 00:01:58.975 ++ VERSION_ID=39 00:01:58.975 ++ VERSION_CODENAME= 00:01:58.975 ++ PLATFORM_ID=platform:f39 00:01:58.975 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:58.975 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:58.975 ++ LOGO=fedora-logo-icon 00:01:58.975 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:58.975 ++ HOME_URL=https://fedoraproject.org/ 00:01:58.975 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:58.975 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:58.975 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:58.975 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:58.975 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:58.975 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:58.975 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:58.975 ++ SUPPORT_END=2024-11-12 00:01:58.975 ++ VARIANT='Cloud Edition' 00:01:58.975 ++ VARIANT_ID=cloud 00:01:58.975 + uname -a 00:01:58.975 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:58.975 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:59.560 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:59.560 Hugepages 00:01:59.560 node hugesize free / total 00:01:59.560 node0 1048576kB 0 / 0 00:01:59.560 node0 2048kB 0 / 0 00:01:59.560 00:01:59.560 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:59.560 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:59.560 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:59.560 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:01:59.560 + rm -f /tmp/spdk-ld-path 00:01:59.560 + source autorun-spdk.conf 00:01:59.560 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:59.560 ++ SPDK_RUN_ASAN=1 00:01:59.560 ++ SPDK_RUN_UBSAN=1 00:01:59.560 ++ SPDK_TEST_RAID=1 00:01:59.560 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:59.560 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:59.560 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:59.560 ++ RUN_NIGHTLY=1 00:01:59.560 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:59.560 + [[ -n '' ]] 00:01:59.560 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:59.821 + for M in /var/spdk/build-*-manifest.txt 00:01:59.821 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:59.821 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:59.821 + for M in /var/spdk/build-*-manifest.txt 00:01:59.821 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:59.821 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:59.821 + for M in /var/spdk/build-*-manifest.txt 00:01:59.821 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:59.821 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:59.821 ++ uname 00:01:59.821 + [[ Linux == \L\i\n\u\x ]] 00:01:59.821 + sudo dmesg -T 00:01:59.821 + sudo dmesg --clear 00:01:59.821 + dmesg_pid=6168 00:01:59.821 + [[ Fedora Linux == FreeBSD ]] 00:01:59.821 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:59.821 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:59.821 + sudo dmesg -Tw 00:01:59.821 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:59.821 + [[ -x /usr/src/fio-static/fio ]] 00:01:59.821 + export FIO_BIN=/usr/src/fio-static/fio 00:01:59.821 + FIO_BIN=/usr/src/fio-static/fio 00:01:59.821 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:59.821 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:59.821 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:59.821 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:59.821 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:59.821 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:59.821 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:59.821 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:59.821 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:59.821 Test configuration: 00:01:59.821 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:59.821 SPDK_RUN_ASAN=1 00:01:59.821 SPDK_RUN_UBSAN=1 00:01:59.821 SPDK_TEST_RAID=1 00:01:59.821 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:59.821 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:59.821 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:00.080 RUN_NIGHTLY=1 12:45:17 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:00.080 12:45:17 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:00.080 12:45:17 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:00.080 12:45:17 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:00.080 12:45:17 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:00.080 12:45:17 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:00.080 12:45:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:00.080 12:45:17 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:00.080 12:45:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:00.080 12:45:17 -- paths/export.sh@5 -- $ export PATH 00:02:00.081 12:45:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:00.081 12:45:17 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:00.081 12:45:17 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:00.081 12:45:17 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1732625117.XXXXXX 00:02:00.081 12:45:17 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1732625117.jMC2dl 00:02:00.081 12:45:17 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:00.081 12:45:17 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:02:00.081 12:45:17 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:00.081 12:45:17 -- common/autobuild_common.sh@486 -- $ 
scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:00.081 12:45:17 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:00.081 12:45:17 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:00.081 12:45:17 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:00.081 12:45:17 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:00.081 12:45:17 -- common/autotest_common.sh@10 -- $ set +x 00:02:00.081 12:45:17 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:00.081 12:45:17 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:00.081 12:45:17 -- pm/common@17 -- $ local monitor 00:02:00.081 12:45:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:00.081 12:45:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:00.081 12:45:17 -- pm/common@25 -- $ sleep 1 00:02:00.081 12:45:17 -- pm/common@21 -- $ date +%s 00:02:00.081 12:45:17 -- pm/common@21 -- $ date +%s 00:02:00.081 12:45:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732625117 00:02:00.081 12:45:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732625117 00:02:00.081 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732625117_collect-vmstat.pm.log 00:02:00.081 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732625117_collect-cpu-load.pm.log 00:02:01.020 12:45:18 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:02:01.020 12:45:18 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:01.020 12:45:18 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:01.020 12:45:18 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:01.020 12:45:18 -- spdk/autobuild.sh@16 -- $ date -u 00:02:01.020 Tue Nov 26 12:45:18 PM UTC 2024 00:02:01.020 12:45:18 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:01.020 v24.09-1-gb18e1bd62 00:02:01.020 12:45:18 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:01.020 12:45:18 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:01.020 12:45:18 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:01.020 12:45:18 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:01.020 12:45:18 -- common/autotest_common.sh@10 -- $ set +x 00:02:01.020 ************************************ 00:02:01.020 START TEST asan 00:02:01.020 ************************************ 00:02:01.020 using asan 00:02:01.020 12:45:18 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:01.020 00:02:01.020 real 0m0.001s 00:02:01.020 user 0m0.001s 00:02:01.020 sys 0m0.000s 00:02:01.020 12:45:18 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:01.020 12:45:18 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:01.020 ************************************ 00:02:01.020 END TEST asan 00:02:01.020 ************************************ 00:02:01.281 12:45:18 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:01.281 12:45:18 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:01.281 12:45:18 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:01.281 12:45:18 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:01.281 12:45:18 -- common/autotest_common.sh@10 -- $ set +x 00:02:01.281 
************************************ 00:02:01.281 START TEST ubsan 00:02:01.281 ************************************ 00:02:01.281 using ubsan 00:02:01.281 12:45:18 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:01.281 00:02:01.281 real 0m0.000s 00:02:01.281 user 0m0.000s 00:02:01.281 sys 0m0.000s 00:02:01.281 12:45:18 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:01.281 12:45:18 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:01.281 ************************************ 00:02:01.281 END TEST ubsan 00:02:01.281 ************************************ 00:02:01.281 12:45:18 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:01.281 12:45:18 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:01.281 12:45:18 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:01.281 12:45:18 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:01.281 12:45:18 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:01.281 12:45:18 -- common/autotest_common.sh@10 -- $ set +x 00:02:01.281 ************************************ 00:02:01.281 START TEST build_native_dpdk 00:02:01.281 ************************************ 00:02:01.281 12:45:18 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:01.281 12:45:18 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:01.281 12:45:18 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:01.281 12:45:18 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:01.281 12:45:18 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:01.281 12:45:18 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:01.281 12:45:18 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:01.281 12:45:18 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 
00:02:01.281 12:45:18 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:01.281 12:45:18 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:01.281 12:45:18 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:01.281 12:45:18 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:01.282 eeb0605f11 version: 23.11.0 00:02:01.282 238778122a doc: update release notes for 23.11 00:02:01.282 46aa6b3cfc doc: fix description of RSS features 00:02:01.282 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:01.282 7e421ae345 devtools: support skipping forbid rule check 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:01.282 12:45:18 build_native_dpdk -- 
common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:01.282 patching file config/rte_config.h 00:02:01.282 Hunk #1 succeeded at 60 (offset 1 line). 
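The xtrace above walks `scripts/common.sh`'s `cmp_versions` helper: each version string is split on `.-:` into an array (`read -ra ver1`), then compared component by component as integers (here 23 vs 21, so `23.11.0 < 21.11.0` fails and the rte_config.h patch path is taken). As a minimal sketch of that component-wise comparison, assuming bash; `version_lt` is a hypothetical name, not the script's actual function:

```shell
#!/usr/bin/env bash
# Hypothetical re-implementation of the "<" comparison traced above:
# split both versions on dots and compare numerically, left to right.
version_lt() {
    local IFS=.
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        # Missing components default to 0, mirroring the padded comparison.
        local x=${a[i]:-0} y=${b[i]:-0}
        ((x < y)) && return 0
        ((x > y)) && return 1
    done
    return 1  # equal versions are not less-than
}

version_lt 21.11.0 23.11.0 && echo "21.11.0 is older"
version_lt 23.11.0 21.11.0 || echo "23.11.0 is not older"
```

Because the split is per-component and numeric, `1.10.0` correctly sorts after `1.2.0`, which a plain string comparison would get wrong.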
00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:01.282 patching file lib/pcapng/rte_pcapng.c 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:01.282 12:45:18 build_native_dpdk -- 
scripts/common.sh@338 -- $ local 'op=>=' 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:01.282 12:45:18 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@184 -- 
$ '[' Linux = FreeBSD ']' 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:01.282 12:45:18 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:07.863 The Meson build system 00:02:07.863 Version: 1.5.0 00:02:07.863 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:07.863 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:07.863 Build type: native build 00:02:07.863 Program cat found: YES (/usr/bin/cat) 00:02:07.863 Project name: DPDK 00:02:07.863 Project version: 23.11.0 00:02:07.863 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:07.863 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:07.863 Host machine cpu family: x86_64 00:02:07.863 Host machine cpu: x86_64 00:02:07.863 Message: ## Building in Developer Mode ## 00:02:07.863 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:07.863 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:07.863 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:07.863 Program python3 found: YES (/usr/bin/python3) 00:02:07.863 Program cat found: YES (/usr/bin/cat) 00:02:07.863 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:07.863 Compiler for C supports arguments -march=native: YES 00:02:07.863 Checking for size of "void *" : 8 00:02:07.863 Checking for size of "void *" : 8 (cached) 00:02:07.863 Library m found: YES 00:02:07.863 Library numa found: YES 00:02:07.863 Has header "numaif.h" : YES 00:02:07.863 Library fdt found: NO 00:02:07.863 Library execinfo found: NO 00:02:07.863 Has header "execinfo.h" : YES 00:02:07.863 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:07.863 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:07.863 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:07.863 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:07.863 Run-time dependency openssl found: YES 3.1.1 00:02:07.863 Run-time dependency libpcap found: YES 1.10.4 00:02:07.863 Has header "pcap.h" with dependency libpcap: YES 00:02:07.863 Compiler for C supports arguments -Wcast-qual: YES 00:02:07.863 Compiler for C supports arguments -Wdeprecated: YES 00:02:07.863 Compiler for C supports arguments -Wformat: YES 00:02:07.863 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:07.863 Compiler for C supports arguments -Wformat-security: NO 00:02:07.863 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:07.863 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:07.863 Compiler for C supports arguments -Wnested-externs: YES 00:02:07.863 Compiler for C supports arguments -Wold-style-definition: YES 00:02:07.863 Compiler for C supports arguments -Wpointer-arith: YES 00:02:07.863 Compiler for C supports arguments -Wsign-compare: YES 00:02:07.863 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:07.863 Compiler for C supports arguments -Wundef: YES 00:02:07.863 Compiler for C supports arguments -Wwrite-strings: YES 00:02:07.863 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:07.863 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:07.863 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:02:07.863 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:07.863 Program objdump found: YES (/usr/bin/objdump) 00:02:07.863 Compiler for C supports arguments -mavx512f: YES 00:02:07.863 Checking if "AVX512 checking" compiles: YES 00:02:07.863 Fetching value of define "__SSE4_2__" : 1 00:02:07.863 Fetching value of define "__AES__" : 1 00:02:07.863 Fetching value of define "__AVX__" : 1 00:02:07.863 Fetching value of define "__AVX2__" : 1 00:02:07.863 Fetching value of define "__AVX512BW__" : 1 00:02:07.863 Fetching value of define "__AVX512CD__" : 1 00:02:07.863 Fetching value of define "__AVX512DQ__" : 1 00:02:07.863 Fetching value of define "__AVX512F__" : 1 00:02:07.863 Fetching value of define "__AVX512VL__" : 1 00:02:07.863 Fetching value of define "__PCLMUL__" : 1 00:02:07.863 Fetching value of define "__RDRND__" : 1 00:02:07.863 Fetching value of define "__RDSEED__" : 1 00:02:07.863 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:07.863 Fetching value of define "__znver1__" : (undefined) 00:02:07.863 Fetching value of define "__znver2__" : (undefined) 00:02:07.863 Fetching value of define "__znver3__" : (undefined) 00:02:07.863 Fetching value of define "__znver4__" : (undefined) 00:02:07.863 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:07.863 Message: lib/log: Defining dependency "log" 00:02:07.863 Message: lib/kvargs: Defining dependency "kvargs" 00:02:07.863 Message: lib/telemetry: Defining dependency "telemetry" 00:02:07.863 Checking for function "getentropy" : NO 00:02:07.863 Message: lib/eal: Defining dependency "eal" 00:02:07.863 Message: lib/ring: Defining dependency "ring" 00:02:07.863 Message: lib/rcu: Defining dependency "rcu" 00:02:07.863 Message: lib/mempool: Defining dependency "mempool" 00:02:07.863 Message: lib/mbuf: Defining dependency "mbuf" 00:02:07.863 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:07.863 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:02:07.863 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:07.863 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:07.863 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:07.863 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:07.863 Compiler for C supports arguments -mpclmul: YES 00:02:07.863 Compiler for C supports arguments -maes: YES 00:02:07.863 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:07.863 Compiler for C supports arguments -mavx512bw: YES 00:02:07.863 Compiler for C supports arguments -mavx512dq: YES 00:02:07.863 Compiler for C supports arguments -mavx512vl: YES 00:02:07.863 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:07.863 Compiler for C supports arguments -mavx2: YES 00:02:07.863 Compiler for C supports arguments -mavx: YES 00:02:07.863 Message: lib/net: Defining dependency "net" 00:02:07.863 Message: lib/meter: Defining dependency "meter" 00:02:07.863 Message: lib/ethdev: Defining dependency "ethdev" 00:02:07.863 Message: lib/pci: Defining dependency "pci" 00:02:07.863 Message: lib/cmdline: Defining dependency "cmdline" 00:02:07.863 Message: lib/metrics: Defining dependency "metrics" 00:02:07.863 Message: lib/hash: Defining dependency "hash" 00:02:07.863 Message: lib/timer: Defining dependency "timer" 00:02:07.863 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:07.863 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:07.863 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:07.863 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:07.863 Message: lib/acl: Defining dependency "acl" 00:02:07.863 Message: lib/bbdev: Defining dependency "bbdev" 00:02:07.863 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:07.863 Run-time dependency libelf found: YES 0.191 00:02:07.863 Message: lib/bpf: Defining dependency "bpf" 00:02:07.863 Message: lib/cfgfile: Defining dependency 
"cfgfile" 00:02:07.863 Message: lib/compressdev: Defining dependency "compressdev" 00:02:07.863 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:07.863 Message: lib/distributor: Defining dependency "distributor" 00:02:07.863 Message: lib/dmadev: Defining dependency "dmadev" 00:02:07.863 Message: lib/efd: Defining dependency "efd" 00:02:07.863 Message: lib/eventdev: Defining dependency "eventdev" 00:02:07.863 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:07.863 Message: lib/gpudev: Defining dependency "gpudev" 00:02:07.863 Message: lib/gro: Defining dependency "gro" 00:02:07.863 Message: lib/gso: Defining dependency "gso" 00:02:07.863 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:07.863 Message: lib/jobstats: Defining dependency "jobstats" 00:02:07.863 Message: lib/latencystats: Defining dependency "latencystats" 00:02:07.863 Message: lib/lpm: Defining dependency "lpm" 00:02:07.863 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:07.863 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:07.864 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:07.864 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:07.864 Message: lib/member: Defining dependency "member" 00:02:07.864 Message: lib/pcapng: Defining dependency "pcapng" 00:02:07.864 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:07.864 Message: lib/power: Defining dependency "power" 00:02:07.864 Message: lib/rawdev: Defining dependency "rawdev" 00:02:07.864 Message: lib/regexdev: Defining dependency "regexdev" 00:02:07.864 Message: lib/mldev: Defining dependency "mldev" 00:02:07.864 Message: lib/rib: Defining dependency "rib" 00:02:07.864 Message: lib/reorder: Defining dependency "reorder" 00:02:07.864 Message: lib/sched: Defining dependency "sched" 00:02:07.864 Message: lib/security: Defining dependency "security" 00:02:07.864 Message: lib/stack: Defining dependency "stack" 00:02:07.864 Has header 
"linux/userfaultfd.h" : YES 00:02:07.864 Has header "linux/vduse.h" : YES 00:02:07.864 Message: lib/vhost: Defining dependency "vhost" 00:02:07.864 Message: lib/ipsec: Defining dependency "ipsec" 00:02:07.864 Message: lib/pdcp: Defining dependency "pdcp" 00:02:07.864 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:07.864 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:07.864 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:07.864 Message: lib/fib: Defining dependency "fib" 00:02:07.864 Message: lib/port: Defining dependency "port" 00:02:07.864 Message: lib/pdump: Defining dependency "pdump" 00:02:07.864 Message: lib/table: Defining dependency "table" 00:02:07.864 Message: lib/pipeline: Defining dependency "pipeline" 00:02:07.864 Message: lib/graph: Defining dependency "graph" 00:02:07.864 Message: lib/node: Defining dependency "node" 00:02:07.864 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:07.864 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:07.864 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:09.248 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:09.248 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:09.248 Compiler for C supports arguments -Wno-unused-value: YES 00:02:09.248 Compiler for C supports arguments -Wno-format: YES 00:02:09.248 Compiler for C supports arguments -Wno-format-security: YES 00:02:09.248 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:09.248 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:09.248 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:09.248 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:09.248 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:09.248 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:09.248 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:09.248 Compiler for C supports 
arguments -mavx512bw: YES (cached) 00:02:09.248 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:09.248 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:09.248 Has header "sys/epoll.h" : YES 00:02:09.248 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:09.248 Configuring doxy-api-html.conf using configuration 00:02:09.248 Configuring doxy-api-man.conf using configuration 00:02:09.248 Program mandb found: YES (/usr/bin/mandb) 00:02:09.248 Program sphinx-build found: NO 00:02:09.248 Configuring rte_build_config.h using configuration 00:02:09.248 Message: 00:02:09.248 ================= 00:02:09.248 Applications Enabled 00:02:09.248 ================= 00:02:09.248 00:02:09.248 apps: 00:02:09.248 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:09.248 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:09.248 test-pmd, test-regex, test-sad, test-security-perf, 00:02:09.248 00:02:09.248 Message: 00:02:09.248 ================= 00:02:09.248 Libraries Enabled 00:02:09.248 ================= 00:02:09.248 00:02:09.248 libs: 00:02:09.248 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:09.248 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:09.248 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:09.248 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:09.248 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:09.248 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:09.248 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:09.248 00:02:09.248 00:02:09.248 Message: 00:02:09.248 =============== 00:02:09.248 Drivers Enabled 00:02:09.248 =============== 00:02:09.248 00:02:09.248 common: 00:02:09.248 00:02:09.248 bus: 00:02:09.248 pci, vdev, 00:02:09.248 mempool: 00:02:09.248 ring, 00:02:09.248 dma: 
00:02:09.248 00:02:09.248 net: 00:02:09.248 i40e, 00:02:09.248 raw: 00:02:09.248 00:02:09.248 crypto: 00:02:09.248 00:02:09.248 compress: 00:02:09.248 00:02:09.248 regex: 00:02:09.248 00:02:09.248 ml: 00:02:09.248 00:02:09.248 vdpa: 00:02:09.248 00:02:09.248 event: 00:02:09.248 00:02:09.248 baseband: 00:02:09.248 00:02:09.248 gpu: 00:02:09.248 00:02:09.248 00:02:09.248 Message: 00:02:09.248 ================= 00:02:09.248 Content Skipped 00:02:09.248 ================= 00:02:09.248 00:02:09.248 apps: 00:02:09.248 00:02:09.248 libs: 00:02:09.248 00:02:09.248 drivers: 00:02:09.248 common/cpt: not in enabled drivers build config 00:02:09.248 common/dpaax: not in enabled drivers build config 00:02:09.248 common/iavf: not in enabled drivers build config 00:02:09.248 common/idpf: not in enabled drivers build config 00:02:09.248 common/mvep: not in enabled drivers build config 00:02:09.248 common/octeontx: not in enabled drivers build config 00:02:09.248 bus/auxiliary: not in enabled drivers build config 00:02:09.248 bus/cdx: not in enabled drivers build config 00:02:09.248 bus/dpaa: not in enabled drivers build config 00:02:09.248 bus/fslmc: not in enabled drivers build config 00:02:09.248 bus/ifpga: not in enabled drivers build config 00:02:09.248 bus/platform: not in enabled drivers build config 00:02:09.248 bus/vmbus: not in enabled drivers build config 00:02:09.248 common/cnxk: not in enabled drivers build config 00:02:09.248 common/mlx5: not in enabled drivers build config 00:02:09.248 common/nfp: not in enabled drivers build config 00:02:09.248 common/qat: not in enabled drivers build config 00:02:09.248 common/sfc_efx: not in enabled drivers build config 00:02:09.248 mempool/bucket: not in enabled drivers build config 00:02:09.248 mempool/cnxk: not in enabled drivers build config 00:02:09.248 mempool/dpaa: not in enabled drivers build config 00:02:09.248 mempool/dpaa2: not in enabled drivers build config 00:02:09.248 mempool/octeontx: not in enabled drivers build 
config
00:02:09.248 mempool/stack: not in enabled drivers build config
00:02:09.248 dma/cnxk: not in enabled drivers build config
00:02:09.248 dma/dpaa: not in enabled drivers build config
00:02:09.248 dma/dpaa2: not in enabled drivers build config
00:02:09.248 dma/hisilicon: not in enabled drivers build config
00:02:09.248 dma/idxd: not in enabled drivers build config
00:02:09.248 dma/ioat: not in enabled drivers build config
00:02:09.248 dma/skeleton: not in enabled drivers build config
00:02:09.248 net/af_packet: not in enabled drivers build config
00:02:09.248 net/af_xdp: not in enabled drivers build config
00:02:09.248 net/ark: not in enabled drivers build config
00:02:09.248 net/atlantic: not in enabled drivers build config
00:02:09.248 net/avp: not in enabled drivers build config
00:02:09.248 net/axgbe: not in enabled drivers build config
00:02:09.248 net/bnx2x: not in enabled drivers build config
00:02:09.248 net/bnxt: not in enabled drivers build config
00:02:09.248 net/bonding: not in enabled drivers build config
00:02:09.248 net/cnxk: not in enabled drivers build config
00:02:09.248 net/cpfl: not in enabled drivers build config
00:02:09.248 net/cxgbe: not in enabled drivers build config
00:02:09.248 net/dpaa: not in enabled drivers build config
00:02:09.248 net/dpaa2: not in enabled drivers build config
00:02:09.248 net/e1000: not in enabled drivers build config
00:02:09.248 net/ena: not in enabled drivers build config
00:02:09.248 net/enetc: not in enabled drivers build config
00:02:09.248 net/enetfec: not in enabled drivers build config
00:02:09.248 net/enic: not in enabled drivers build config
00:02:09.248 net/failsafe: not in enabled drivers build config
00:02:09.248 net/fm10k: not in enabled drivers build config
00:02:09.248 net/gve: not in enabled drivers build config
00:02:09.248 net/hinic: not in enabled drivers build config
00:02:09.248 net/hns3: not in enabled drivers build config
00:02:09.248 net/iavf: not in enabled drivers build config
00:02:09.248 net/ice: not in enabled drivers build config
00:02:09.248 net/idpf: not in enabled drivers build config
00:02:09.248 net/igc: not in enabled drivers build config
00:02:09.248 net/ionic: not in enabled drivers build config
00:02:09.248 net/ipn3ke: not in enabled drivers build config
00:02:09.248 net/ixgbe: not in enabled drivers build config
00:02:09.248 net/mana: not in enabled drivers build config
00:02:09.248 net/memif: not in enabled drivers build config
00:02:09.248 net/mlx4: not in enabled drivers build config
00:02:09.248 net/mlx5: not in enabled drivers build config
00:02:09.248 net/mvneta: not in enabled drivers build config
00:02:09.248 net/mvpp2: not in enabled drivers build config
00:02:09.248 net/netvsc: not in enabled drivers build config
00:02:09.248 net/nfb: not in enabled drivers build config
00:02:09.248 net/nfp: not in enabled drivers build config
00:02:09.249 net/ngbe: not in enabled drivers build config
00:02:09.249 net/null: not in enabled drivers build config
00:02:09.249 net/octeontx: not in enabled drivers build config
00:02:09.249 net/octeon_ep: not in enabled drivers build config
00:02:09.249 net/pcap: not in enabled drivers build config
00:02:09.249 net/pfe: not in enabled drivers build config
00:02:09.249 net/qede: not in enabled drivers build config
00:02:09.249 net/ring: not in enabled drivers build config
00:02:09.249 net/sfc: not in enabled drivers build config
00:02:09.249 net/softnic: not in enabled drivers build config
00:02:09.249 net/tap: not in enabled drivers build config
00:02:09.249 net/thunderx: not in enabled drivers build config
00:02:09.249 net/txgbe: not in enabled drivers build config
00:02:09.249 net/vdev_netvsc: not in enabled drivers build config
00:02:09.249 net/vhost: not in enabled drivers build config
00:02:09.249 net/virtio: not in enabled drivers build config
00:02:09.249 net/vmxnet3: not in enabled drivers build config
00:02:09.249 raw/cnxk_bphy: not in enabled drivers build config
00:02:09.249 raw/cnxk_gpio: not in enabled drivers build config
00:02:09.249 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:09.249 raw/ifpga: not in enabled drivers build config
00:02:09.249 raw/ntb: not in enabled drivers build config
00:02:09.249 raw/skeleton: not in enabled drivers build config
00:02:09.249 crypto/armv8: not in enabled drivers build config
00:02:09.249 crypto/bcmfs: not in enabled drivers build config
00:02:09.249 crypto/caam_jr: not in enabled drivers build config
00:02:09.249 crypto/ccp: not in enabled drivers build config
00:02:09.249 crypto/cnxk: not in enabled drivers build config
00:02:09.249 crypto/dpaa_sec: not in enabled drivers build config
00:02:09.249 crypto/dpaa2_sec: not in enabled drivers build config
00:02:09.249 crypto/ipsec_mb: not in enabled drivers build config
00:02:09.249 crypto/mlx5: not in enabled drivers build config
00:02:09.249 crypto/mvsam: not in enabled drivers build config
00:02:09.249 crypto/nitrox: not in enabled drivers build config
00:02:09.249 crypto/null: not in enabled drivers build config
00:02:09.249 crypto/octeontx: not in enabled drivers build config
00:02:09.249 crypto/openssl: not in enabled drivers build config
00:02:09.249 crypto/scheduler: not in enabled drivers build config
00:02:09.249 crypto/uadk: not in enabled drivers build config
00:02:09.249 crypto/virtio: not in enabled drivers build config
00:02:09.249 compress/isal: not in enabled drivers build config
00:02:09.249 compress/mlx5: not in enabled drivers build config
00:02:09.249 compress/octeontx: not in enabled drivers build config
00:02:09.249 compress/zlib: not in enabled drivers build config
00:02:09.249 regex/mlx5: not in enabled drivers build config
00:02:09.249 regex/cn9k: not in enabled drivers build config
00:02:09.249 ml/cnxk: not in enabled drivers build config
00:02:09.249 vdpa/ifc: not in enabled drivers build config
00:02:09.249 vdpa/mlx5: not in enabled drivers build config
00:02:09.249 vdpa/nfp: not in enabled drivers build config
00:02:09.249 vdpa/sfc: not in enabled drivers build config
00:02:09.249 event/cnxk: not in enabled drivers build config
00:02:09.249 event/dlb2: not in enabled drivers build config
00:02:09.249 event/dpaa: not in enabled drivers build config
00:02:09.249 event/dpaa2: not in enabled drivers build config
00:02:09.249 event/dsw: not in enabled drivers build config
00:02:09.249 event/opdl: not in enabled drivers build config
00:02:09.249 event/skeleton: not in enabled drivers build config
00:02:09.249 event/sw: not in enabled drivers build config
00:02:09.249 event/octeontx: not in enabled drivers build config
00:02:09.249 baseband/acc: not in enabled drivers build config
00:02:09.249 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:09.249 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:09.249 baseband/la12xx: not in enabled drivers build config
00:02:09.249 baseband/null: not in enabled drivers build config
00:02:09.249 baseband/turbo_sw: not in enabled drivers build config
00:02:09.249 gpu/cuda: not in enabled drivers build config
00:02:09.249
00:02:09.249
00:02:09.249 Build targets in project: 217
00:02:09.249
00:02:09.249 DPDK 23.11.0
00:02:09.249
00:02:09.249 User defined options
00:02:09.249 libdir : lib
00:02:09.249 prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:09.249 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:09.249 c_link_args :
00:02:09.249 enable_docs : false
00:02:09.249 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:09.249 enable_kmods : false
00:02:09.249 machine : native
00:02:09.249 tests : false
00:02:09.249
00:02:09.249 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:09.249 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:02:09.249 12:45:26 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:09.249 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:09.249 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:09.249 [2/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:09.249 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:09.509 [4/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:09.509 [5/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:09.509 [6/707] Linking static target lib/librte_kvargs.a 00:02:09.509 [7/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:09.509 [8/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:09.509 [9/707] Linking static target lib/librte_log.a 00:02:09.509 [10/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:09.509 [11/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.509 [12/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:09.769 [13/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:09.769 [14/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:09.769 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:09.769 [16/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.769 [17/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:09.769 [18/707] Linking target lib/librte_log.so.24.0 00:02:09.769 [19/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:10.028 [20/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:10.028 [21/707] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:10.028 [22/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:10.028 [23/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:10.028 [24/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:10.029 [25/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:10.029 [26/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:10.288 [27/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:10.288 [28/707] Linking static target lib/librte_telemetry.a 00:02:10.288 [29/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:10.288 [30/707] Linking target lib/librte_kvargs.so.24.0 00:02:10.288 [31/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:10.288 [32/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:10.288 [33/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:10.288 [34/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:10.548 [35/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:10.548 [36/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:10.548 [37/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:10.548 [38/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:10.548 [39/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:10.548 [40/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:10.548 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:10.548 [42/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:10.548 [43/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:10.548 [44/707] Linking target lib/librte_telemetry.so.24.0 00:02:10.808 [45/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:10.808 [46/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:10.808 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:10.808 [48/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:10.808 [49/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:11.068 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:11.069 [51/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:11.069 [52/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:11.069 [53/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:11.069 [54/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:11.069 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:11.069 [56/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:11.069 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:11.069 [58/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:11.069 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:11.328 [60/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:11.328 [61/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:11.328 [62/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:11.328 [63/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:11.328 [64/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:11.328 [65/707] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:11.328 [66/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:11.328 [67/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:11.328 [68/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:11.588 [69/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:11.588 [70/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:11.588 [71/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:11.588 [72/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:11.588 [73/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:11.588 [74/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:11.588 [75/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:11.588 [76/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:11.588 [77/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:11.588 [78/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:11.848 [79/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:11.848 [80/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:11.848 [81/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:11.848 [82/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:12.107 [83/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:12.107 [84/707] Linking static target lib/librte_ring.a 00:02:12.107 [85/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:12.107 [86/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:12.108 [87/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:12.108 [88/707] Linking static target lib/librte_eal.a 00:02:12.108 [89/707] 
Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:12.368 [90/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:12.368 [91/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.368 [92/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:12.368 [93/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:12.368 [94/707] Linking static target lib/librte_mempool.a 00:02:12.368 [95/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:12.628 [96/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:12.628 [97/707] Linking static target lib/librte_rcu.a 00:02:12.628 [98/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:12.628 [99/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:12.628 [100/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:12.628 [101/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:12.628 [102/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:12.887 [103/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:12.887 [104/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:12.887 [105/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.887 [106/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.887 [107/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:12.887 [108/707] Linking static target lib/librte_net.a 00:02:12.887 [109/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:12.887 [110/707] Linking static target lib/librte_mbuf.a 00:02:12.887 [111/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:12.887 [112/707] Linking static target lib/librte_meter.a 00:02:13.147 [113/707] 
Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.147 [114/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:13.147 [115/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:13.147 [116/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:13.147 [117/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.147 [118/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:13.406 [119/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.666 [120/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:13.666 [121/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:13.926 [122/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:13.926 [123/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:13.926 [124/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:13.926 [125/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:13.926 [126/707] Linking static target lib/librte_pci.a 00:02:13.926 [127/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:13.926 [128/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:14.196 [129/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:14.196 [130/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:14.196 [131/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:14.196 [132/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:14.196 [133/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.196 [134/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:14.196 [135/707] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:14.196 [136/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:14.196 [137/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:14.196 [138/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:14.196 [139/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:14.471 [140/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:14.471 [141/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:14.471 [142/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:14.471 [143/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:14.471 [144/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:14.471 [145/707] Linking static target lib/librte_cmdline.a 00:02:14.731 [146/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:14.731 [147/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:14.731 [148/707] Linking static target lib/librte_metrics.a 00:02:14.731 [149/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:14.991 [150/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:14.991 [151/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:14.991 [152/707] Linking static target lib/librte_timer.a 00:02:14.991 [153/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:14.991 [154/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.251 [155/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.251 [156/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.510 [157/707] 
Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:15.511 [158/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:15.511 [159/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:15.511 [160/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:15.770 [161/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:15.770 [162/707] Linking static target lib/librte_bitratestats.a 00:02:16.030 [163/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:16.030 [164/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.030 [165/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:16.030 [166/707] Linking static target lib/librte_bbdev.a 00:02:16.290 [167/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:16.290 [168/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:16.549 [169/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:16.550 [170/707] Linking static target lib/librte_hash.a 00:02:16.550 [171/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:16.550 [172/707] Linking static target lib/librte_ethdev.a 00:02:16.550 [173/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:16.550 [174/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:16.550 [175/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.810 [176/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:16.810 [177/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:16.810 [178/707] Linking static target lib/acl/libavx2_tmp.a 00:02:16.810 [179/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:16.810 [180/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.069 [181/707] Compiling C object 
lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:17.070 [182/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.070 [183/707] Linking target lib/librte_eal.so.24.0 00:02:17.070 [184/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:17.070 [185/707] Linking static target lib/librte_cfgfile.a 00:02:17.070 [186/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:17.070 [187/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:17.330 [188/707] Linking target lib/librte_ring.so.24.0 00:02:17.330 [189/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:17.330 [190/707] Linking target lib/librte_meter.so.24.0 00:02:17.330 [191/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:17.330 [192/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:17.330 [193/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:17.330 [194/707] Linking target lib/librte_pci.so.24.0 00:02:17.330 [195/707] Linking target lib/librte_rcu.so.24.0 00:02:17.330 [196/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.330 [197/707] Linking target lib/librte_mempool.so.24.0 00:02:17.330 [198/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:17.330 [199/707] Linking target lib/librte_timer.so.24.0 00:02:17.330 [200/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:17.330 [201/707] Linking target lib/librte_cfgfile.so.24.0 00:02:17.330 [202/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:17.330 [203/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:17.590 [204/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:17.590 [205/707] Generating 
symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:17.590 [206/707] Linking target lib/librte_mbuf.so.24.0 00:02:17.590 [207/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:17.590 [208/707] Linking static target lib/librte_bpf.a 00:02:17.590 [209/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:17.590 [210/707] Linking static target lib/librte_compressdev.a 00:02:17.590 [211/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:17.590 [212/707] Linking target lib/librte_net.so.24.0 00:02:17.590 [213/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:17.590 [214/707] Linking static target lib/librte_acl.a 00:02:17.849 [215/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:17.849 [216/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:17.849 [217/707] Linking target lib/librte_cmdline.so.24.0 00:02:17.849 [218/707] Linking target lib/librte_hash.so.24.0 00:02:17.849 [219/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.849 [220/707] Linking target lib/librte_bbdev.so.24.0 00:02:17.849 [221/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:17.849 [222/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:17.849 [223/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.849 [224/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:17.849 [225/707] Linking target lib/librte_acl.so.24.0 00:02:18.110 [226/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.110 [227/707] Linking target lib/librte_compressdev.so.24.0 00:02:18.110 [228/707] Generating symbol file 
lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:18.110 [229/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:18.110 [230/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:18.110 [231/707] Linking static target lib/librte_distributor.a 00:02:18.110 [232/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:18.371 [233/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:18.371 [234/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.371 [235/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:18.371 [236/707] Linking static target lib/librte_dmadev.a 00:02:18.371 [237/707] Linking target lib/librte_distributor.so.24.0 00:02:18.632 [238/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:18.632 [239/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.632 [240/707] Linking target lib/librte_dmadev.so.24.0 00:02:18.892 [241/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:18.892 [242/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:18.892 [243/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:19.151 [244/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:19.151 [245/707] Linking static target lib/librte_efd.a 00:02:19.151 [246/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:19.151 [247/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:19.151 [248/707] Linking static target lib/librte_cryptodev.a 00:02:19.151 [249/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.411 [250/707] Linking target 
lib/librte_efd.so.24.0 00:02:19.411 [251/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:19.411 [252/707] Linking static target lib/librte_dispatcher.a 00:02:19.411 [253/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:19.411 [254/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:19.411 [255/707] Linking static target lib/librte_gpudev.a 00:02:19.671 [256/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:19.671 [257/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:19.671 [258/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.671 [259/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:19.931 [260/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:20.191 [261/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:20.191 [262/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:20.191 [263/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.191 [264/707] Linking target lib/librte_gpudev.so.24.0 00:02:20.191 [265/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:20.191 [266/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:20.191 [267/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.191 [268/707] Linking target lib/librte_cryptodev.so.24.0 00:02:20.191 [269/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:20.191 [270/707] Linking static target lib/librte_gro.a 00:02:20.191 [271/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.451 [272/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:20.451 [273/707] Linking target 
lib/librte_ethdev.so.24.0 00:02:20.451 [274/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:20.451 [275/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:20.451 [276/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:20.451 [277/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.451 [278/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:20.451 [279/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:20.451 [280/707] Linking target lib/librte_metrics.so.24.0 00:02:20.451 [281/707] Linking target lib/librte_gro.so.24.0 00:02:20.451 [282/707] Linking target lib/librte_bpf.so.24.0 00:02:20.451 [283/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:20.451 [284/707] Linking static target lib/librte_gso.a 00:02:20.711 [285/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:20.711 [286/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:20.711 [287/707] Linking target lib/librte_bitratestats.so.24.0 00:02:20.711 [288/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:20.711 [289/707] Linking static target lib/librte_eventdev.a 00:02:20.711 [290/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.711 [291/707] Linking target lib/librte_gso.so.24.0 00:02:20.711 [292/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:20.711 [293/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:20.711 [294/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:20.711 [295/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:20.970 [296/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 
00:02:20.970 [297/707] Linking static target lib/librte_jobstats.a 00:02:20.970 [298/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:20.970 [299/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:20.970 [300/707] Linking static target lib/librte_ip_frag.a 00:02:20.970 [301/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:20.970 [302/707] Linking static target lib/librte_latencystats.a 00:02:21.230 [303/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.230 [304/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.230 [305/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:21.230 [306/707] Linking target lib/librte_jobstats.so.24.0 00:02:21.230 [307/707] Linking target lib/librte_ip_frag.so.24.0 00:02:21.230 [308/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.230 [309/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:21.230 [310/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:21.230 [311/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:21.230 [312/707] Linking target lib/librte_latencystats.so.24.0 00:02:21.490 [313/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:21.490 [314/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:21.490 [315/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:21.490 [316/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:21.490 [317/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:21.490 [318/707] Linking static target lib/librte_lpm.a 00:02:21.750 [319/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:21.750 
[320/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:21.750 [321/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:21.750 [322/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:21.750 [323/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:21.750 [324/707] Linking static target lib/librte_pcapng.a 00:02:22.010 [325/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.010 [326/707] Linking target lib/librte_lpm.so.24.0 00:02:22.010 [327/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:22.010 [328/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:22.010 [329/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:22.010 [330/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.010 [331/707] Linking target lib/librte_pcapng.so.24.0 00:02:22.271 [332/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:22.271 [333/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:22.271 [334/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:22.271 [335/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:22.271 [336/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:22.271 [337/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.271 [338/707] Linking static target lib/librte_power.a 00:02:22.271 [339/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:22.271 [340/707] Linking target lib/librte_eventdev.so.24.0 00:02:22.531 [341/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:22.531 [342/707] Linking static target lib/librte_regexdev.a 
00:02:22.531 [343/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols
00:02:22.531 [344/707] Linking target lib/librte_dispatcher.so.24.0
00:02:22.531 [345/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:02:22.531 [346/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o
00:02:22.531 [347/707] Linking static target lib/librte_rawdev.a
00:02:22.531 [348/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o
00:02:22.531 [349/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:02:22.531 [350/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o
00:02:22.531 [351/707] Linking static target lib/librte_member.a
00:02:22.791 [352/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o
00:02:22.791 [353/707] Linking static target lib/librte_mldev.a
00:02:22.791 [354/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:22.791 [355/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:02:22.791 [356/707] Linking target lib/librte_power.so.24.0
00:02:22.791 [357/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:02:23.051 [358/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:02:23.051 [359/707] Linking target lib/librte_member.so.24.0
00:02:23.051 [360/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:23.051 [361/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:02:23.051 [362/707] Linking target lib/librte_rawdev.so.24.0
00:02:23.051 [363/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:23.052 [364/707] Linking target lib/librte_regexdev.so.24.0
00:02:23.052 [365/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:23.052 [366/707] Linking static target lib/librte_reorder.a
00:02:23.312 [367/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:02:23.312 [368/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:23.312 [369/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:02:23.312 [370/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:02:23.312 [371/707] Linking static target lib/librte_rib.a
00:02:23.312 [372/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:02:23.312 [373/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:02:23.312 [374/707] Linking static target lib/librte_stack.a
00:02:23.312 [375/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:23.312 [376/707] Linking target lib/librte_reorder.so.24.0
00:02:23.572 [377/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:02:23.572 [378/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:23.572 [379/707] Linking static target lib/librte_security.a
00:02:23.572 [380/707] Linking target lib/librte_stack.so.24.0
00:02:23.572 [381/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols
00:02:23.572 [382/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:23.572 [383/707] Linking target lib/librte_rib.so.24.0
00:02:23.572 [384/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:23.572 [385/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:23.572 [386/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:23.832 [387/707] Linking target lib/librte_mldev.so.24.0
00:02:23.832 [388/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols
00:02:23.832 [389/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:23.832 [390/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:23.832 [391/707] Linking target lib/librte_security.so.24.0
00:02:23.832 [392/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:02:23.832 [393/707] Linking static target lib/librte_sched.a
00:02:23.832 [394/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols
00:02:24.092 [395/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.092 [396/707] Linking target lib/librte_sched.so.24.0
00:02:24.092 [397/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:24.352 [398/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:24.352 [399/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols
00:02:24.352 [400/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:24.612 [401/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:02:24.612 [402/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:02:24.612 [403/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:02:24.873 [404/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:24.873 [405/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o
00:02:24.873 [406/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o
00:02:24.873 [407/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o
00:02:25.133 [408/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:02:25.133 [409/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:02:25.133 [410/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:02:25.133 [411/707] Linking static target lib/librte_ipsec.a
00:02:25.133 [412/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o
00:02:25.394 [413/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:02:25.394 [414/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o
00:02:25.394 [415/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:02:25.394 [416/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.394 [417/707] Linking target lib/librte_ipsec.so.24.0
00:02:25.654 [418/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols
00:02:25.654 [419/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o
00:02:25.914 [420/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o
00:02:25.914 [421/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:02:25.914 [422/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:02:25.914 [423/707] Linking static target lib/librte_fib.a
00:02:25.914 [424/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:02:25.914 [425/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:02:26.175 [426/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o
00:02:26.175 [427/707] Linking static target lib/librte_pdcp.a
00:02:26.175 [428/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.175 [429/707] Linking target lib/librte_fib.so.24.0
00:02:26.175 [430/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:02:26.175 [431/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:02:26.175 [432/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:02:26.435 [433/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.435 [434/707] Linking target lib/librte_pdcp.so.24.0
00:02:26.694 [435/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:02:26.694 [436/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:02:26.694 [437/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:02:26.694 [438/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:02:26.952 [439/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:02:26.952 [440/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:02:26.952 [441/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:02:26.952 [442/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:02:27.210 [443/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:02:27.210 [444/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:02:27.210 [445/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:02:27.211 [446/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:02:27.469 [447/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:02:27.469 [448/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:02:27.469 [449/707] Linking static target lib/librte_port.a
00:02:27.469 [450/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:02:27.469 [451/707] Linking static target lib/librte_pdump.a
00:02:27.469 [452/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:02:27.729 [453/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:02:27.729 [454/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.729 [455/707] Linking target lib/librte_pdump.so.24.0
00:02:27.729 [456/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.035 [457/707] Linking target lib/librte_port.so.24.0
00:02:28.035 [458/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:02:28.035 [459/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols
00:02:28.036 [460/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:02:28.036 [461/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:02:28.036 [462/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:02:28.036 [463/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:02:28.294 [464/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:02:28.294 [465/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:02:28.294 [466/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:28.294 [467/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:02:28.294 [468/707] Linking static target lib/librte_table.a
00:02:28.294 [469/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:02:28.553 [470/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:02:28.811 [471/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:02:28.811 [472/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.811 [473/707] Linking target lib/librte_table.so.24.0
00:02:28.811 [474/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:02:29.069 [475/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:02:29.069 [476/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols
00:02:29.069 [477/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:02:29.328 [478/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o
00:02:29.328 [479/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o
00:02:29.328 [480/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:02:29.328 [481/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:02:29.328 [482/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o
00:02:29.588 [483/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:02:29.588 [484/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:02:29.848 [485/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:02:29.848 [486/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:02:29.848 [487/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:02:29.848 [488/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o
00:02:29.848 [489/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o
00:02:29.848 [490/707] Linking static target lib/librte_graph.a
00:02:30.108 [491/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o
00:02:30.368 [492/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:02:30.368 [493/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.628 [494/707] Linking target lib/librte_graph.so.24.0
00:02:30.628 [495/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o
00:02:30.628 [496/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o
00:02:30.628 [497/707] Compiling C object lib/librte_node.a.p/node_null.c.o
00:02:30.628 [498/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols
00:02:30.628 [499/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o
00:02:30.628 [500/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:02:30.628 [501/707] Compiling C object lib/librte_node.a.p/node_log.c.o
00:02:30.888 [502/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o
00:02:30.888 [503/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:30.888 [504/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:02:31.153 [505/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:31.153 [506/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:02:31.153 [507/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:31.153 [508/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:31.153 [509/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:31.153 [510/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o
00:02:31.153 [511/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:31.153 [512/707] Linking static target lib/librte_node.a
00:02:31.427 [513/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:31.427 [514/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.427 [515/707] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:31.735 [516/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:31.735 [517/707] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:31.735 [518/707] Linking target lib/librte_node.so.24.0
00:02:31.735 [519/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:31.735 [520/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:31.735 [521/707] Linking static target drivers/librte_bus_vdev.a
00:02:31.735 [522/707] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:31.735 [523/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:31.735 [524/707] Linking static target drivers/librte_bus_pci.a
00:02:31.735 [525/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:02:32.003 [526/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:32.003 [527/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:32.004 [528/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:02:32.004 [529/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.004 [530/707] Linking target drivers/librte_bus_vdev.so.24.0
00:02:32.004 [531/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:32.004 [532/707] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:32.004 [533/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:02:32.004 [534/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols
00:02:32.264 [535/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:32.264 [536/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:32.264 [537/707] Linking static target drivers/librte_mempool_ring.a
00:02:32.264 [538/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:32.264 [539/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.264 [540/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:02:32.264 [541/707] Linking target drivers/librte_mempool_ring.so.24.0
00:02:32.264 [542/707] Linking target drivers/librte_bus_pci.so.24.0
00:02:32.264 [543/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols
00:02:32.523 [544/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:02:32.783 [545/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:02:32.783 [546/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:02:32.783 [547/707] Linking static target drivers/net/i40e/base/libi40e_base.a
00:02:33.355 [548/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:02:33.615 [549/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o
00:02:33.615 [550/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a
00:02:33.615 [551/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:02:33.615 [552/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:02:33.875 [553/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:02:33.875 [554/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:02:33.875 [555/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:02:34.136 [556/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:02:34.136 [557/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:02:34.136 [558/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:02:34.396 [559/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o
00:02:34.396 [560/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o
00:02:34.657 [561/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o
00:02:34.657 [562/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o
00:02:34.657 [563/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:02:34.917 [564/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o
00:02:34.917 [565/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o
00:02:34.917 [566/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o
00:02:35.177 [567/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o
00:02:35.177 [568/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:02:35.177 [569/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:02:35.177 [570/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o
00:02:35.437 [571/707] Compiling C object app/dpdk-graph.p/graph_main.c.o
00:02:35.437 [572/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o
00:02:35.437 [573/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o
00:02:35.437 [574/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:02:35.437 [575/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o
00:02:35.697 [576/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:02:35.697 [577/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:02:35.697 [578/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:02:35.697 [579/707] Linking static target drivers/libtmp_rte_net_i40e.a
00:02:35.697 [580/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:02:35.957 [581/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:02:35.957 [582/707] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:02:35.957 [583/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:35.957 [584/707] Linking static target drivers/librte_net_i40e.a
00:02:36.217 [585/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:02:36.217 [586/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:36.217 [587/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:02:36.217 [588/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:02:36.217 [589/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:02:36.478 [590/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:02:36.478 [591/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:02:36.738 [592/707] Linking target drivers/librte_net_i40e.so.24.0
00:02:36.738 [593/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:02:36.738 [594/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:02:36.738 [595/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:02:36.999 [596/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:02:36.999 [597/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:02:36.999 [598/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:02:37.259 [599/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:02:37.259 [600/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:02:37.259 [601/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:02:37.519 [602/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:02:37.519 [603/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:02:37.519 [604/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:02:37.519 [605/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:02:37.779 [606/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:02:37.779 [607/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:02:37.779 [608/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o
00:02:37.779 [609/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:02:37.779 [610/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:02:37.779 [611/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:02:38.039 [612/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:02:38.039 [613/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o
00:02:38.039 [614/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:38.298 [615/707] Linking static target lib/librte_vhost.a
00:02:38.298 [616/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:02:38.298 [617/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:02:38.558 [618/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:02:38.818 [619/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:02:38.818 [620/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:02:39.078 [621/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:39.078 [622/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:02:39.078 [623/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:02:39.078 [624/707] Linking target lib/librte_vhost.so.24.0
00:02:39.338 [625/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:02:39.338 [626/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:02:39.338 [627/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:02:39.338 [628/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o
00:02:39.338 [629/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:02:39.598 [630/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:02:39.598 [631/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:02:39.598 [632/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o
00:02:39.598 [633/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o
00:02:39.858 [634/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o
00:02:39.858 [635/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o
00:02:39.858 [636/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o
00:02:39.858 [637/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o
00:02:40.118 [638/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o
00:02:40.118 [639/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:02:40.118 [640/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o
00:02:40.118 [641/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o
00:02:40.118 [642/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o
00:02:40.378 [643/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:02:40.378 [644/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:02:40.378 [645/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:02:40.378 [646/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:02:40.638 [647/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:02:40.638 [648/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:02:40.638 [649/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:02:40.638 [650/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:02:40.898 [651/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:02:40.898 [652/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:02:40.898 [653/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o
00:02:41.158 [654/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o
00:02:41.158 [655/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:02:41.158 [656/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:02:41.418 [657/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:02:41.418 [658/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:02:41.418 [659/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:02:41.688 [660/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:02:41.949 [661/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:02:41.949 [662/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:02:41.949 [663/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:02:41.949 [664/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:02:42.209 [665/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:02:42.209 [666/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:02:42.209 [667/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:02:42.209 [668/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o
00:02:42.468 [669/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:02:42.728 [670/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:02:42.728 [671/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:02:42.728 [672/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:02:42.988 [673/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:02:43.248 [674/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:02:43.248 [675/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:02:43.248 [676/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:02:43.508 [677/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:02:43.508 [678/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:02:43.508 [679/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:02:43.508 [680/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:02:43.508 [681/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:02:43.768 [682/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:02:44.027 [683/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:02:44.027 [684/707] Linking static target lib/librte_pipeline.a
00:02:44.287 [685/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:02:44.547 [686/707] Linking target app/dpdk-test-cmdline
00:02:44.547 [687/707] Linking target app/dpdk-proc-info
00:02:44.547 [688/707] Linking target app/dpdk-graph
00:02:44.547 [689/707] Linking target app/dpdk-pdump
00:02:44.547 [690/707] Linking target app/dpdk-test-acl
00:02:44.547 [691/707] Linking target app/dpdk-dumpcap
00:02:44.547 [692/707] Linking target app/dpdk-test-compress-perf
00:02:44.547 [693/707] Linking target app/dpdk-test-bbdev
00:02:44.547 [694/707] Linking target app/dpdk-test-crypto-perf
00:02:44.808 [695/707] Linking target app/dpdk-test-eventdev
00:02:44.808 [696/707] Linking target app/dpdk-test-dma-perf
00:02:45.068 [697/707] Linking target app/dpdk-test-fib
00:02:45.068 [698/707] Linking target app/dpdk-test-gpudev
00:02:45.068 [699/707] Linking target app/dpdk-test-flow-perf
00:02:45.068 [700/707] Linking target app/dpdk-test-mldev
00:02:45.068 [701/707] Linking target app/dpdk-test-pipeline
00:02:45.068 [702/707] Linking target app/dpdk-test-regex
00:02:45.068 [703/707] Linking target app/dpdk-testpmd
00:02:45.328 [704/707] Linking target app/dpdk-test-sad
00:02:45.328 [705/707] Linking target app/dpdk-test-security-perf
00:02:50.656 [706/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.656 [707/707] Linking target lib/librte_pipeline.so.24.0
00:02:50.656 12:46:07 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s
00:02:50.656 12:46:07 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:02:50.656 12:46:07 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:02:50.656 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:50.656 [0/1] Installing files.
00:02:50.656 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.656 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:50.657 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:50.658 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.659 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.659 Installing 
/home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:50.659 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:50.660 Installing 
/home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.660 
Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.660 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:50.660 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:50.660 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 
Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.660 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_bbdev.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_gpudev.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:02:50.661 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_pdump.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.661 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.924 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.924 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.924 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.924 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:50.924 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.924 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:50.924 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.924 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:50.924 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.924 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:50.924 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.924 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.924 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.924 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 
00:02:50.924 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.924 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.924 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.924 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.924 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.924 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.924 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.924 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.924 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.924 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.924 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.924 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.924 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.924 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.924 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.924 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.924 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.924 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.924 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.924 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.924 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to 
/home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.924 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.924 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.924 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.924 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.924 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.924 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.924 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.924 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.924 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 
Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.925 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing 
/home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing 
/home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.926 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing 
/home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 
Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to 
/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:50.927 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:50.927 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:02:50.927 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:02:50.927 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:02:50.927 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:50.927 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:02:50.927 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:50.927 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:02:50.927 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:50.927 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:02:50.927 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:50.927 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:02:50.927 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:50.928 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:02:50.928 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:50.928 Installing symlink pointing to librte_mbuf.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:02:50.928 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:50.928 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:02:50.928 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:50.928 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:02:50.928 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:50.928 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:02:50.928 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:50.928 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:02:50.928 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:50.928 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:02:50.928 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:50.928 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:02:50.928 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:50.928 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:02:50.928 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:50.928 Installing symlink pointing to librte_timer.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:02:50.928 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:50.928 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:02:50.928 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:50.928 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:02:50.928 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:50.928 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:02:50.928 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:50.928 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:02:50.928 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:50.928 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:02:50.928 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:50.928 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:02:50.928 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:50.928 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:02:50.928 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:50.928 Installing symlink 
pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:02:50.928 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:50.928 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:02:50.928 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:50.928 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:02:50.928 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:50.928 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:02:50.928 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:50.928 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:02:50.928 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:02:50.928 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:02:50.928 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:50.928 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:02:50.928 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:50.928 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:02:50.928 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:50.928 
Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:02:50.928 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:50.928 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:02:50.928 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:50.928 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:02:50.928 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:50.928 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:02:50.928 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:50.928 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:02:50.928 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:50.928 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:02:50.928 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:50.928 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:02:50.928 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:50.928 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:02:50.928 Installing symlink pointing to librte_rawdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:50.928 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:02:50.928 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:50.928 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:02:50.928 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:02:50.928 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:02:50.928 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:50.928 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:02:50.928 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:50.928 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:02:50.928 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:50.928 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:02:50.928 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:50.928 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:50.928 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:50.928 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:50.928 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:50.928 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:50.928 './librte_bus_vdev.so.24.0' -> 
'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:50.928 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:50.928 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:50.928 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:50.928 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:50.928 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:50.928 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:50.928 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:02:50.928 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:50.928 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:02:50.928 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:50.928 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:02:50.928 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:50.928 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:02:50.928 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:02:50.928 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:02:50.928 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:50.928 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:02:50.928 Installing symlink pointing to librte_port.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:50.928 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:02:50.928 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:50.928 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:02:50.928 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:50.928 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:02:50.928 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:50.928 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:02:50.929 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:50.929 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:02:50.929 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:50.929 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:50.929 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:50.929 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:50.929 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:50.929 Installing symlink pointing to librte_mempool_ring.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:50.929 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:50.929 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:50.929 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:50.929 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:50.929 12:46:08 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:02:50.929 ************************************ 00:02:50.929 END TEST build_native_dpdk 00:02:50.929 ************************************ 00:02:50.929 12:46:08 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:50.929 00:02:50.929 real 0m49.732s 00:02:50.929 user 4m59.049s 00:02:50.929 sys 0m58.279s 00:02:50.929 12:46:08 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:50.929 12:46:08 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:50.929 12:46:08 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:50.929 12:46:08 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:50.929 12:46:08 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:50.929 12:46:08 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:50.929 12:46:08 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:50.929 12:46:08 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:50.929 12:46:08 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:50.929 12:46:08 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan 
--enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:02:51.189 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:51.449 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.449 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:51.449 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:51.708 Using 'verbs' RDMA provider 00:03:07.992 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:26.098 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:26.098 Creating mk/config.mk...done. 00:03:26.098 Creating mk/cc.flags.mk...done. 00:03:26.098 Type 'make' to build. 00:03:26.098 12:46:41 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:26.098 12:46:41 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:26.098 12:46:41 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:26.098 12:46:41 -- common/autotest_common.sh@10 -- $ set +x 00:03:26.098 ************************************ 00:03:26.098 START TEST make 00:03:26.098 ************************************ 00:03:26.098 12:46:41 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:26.098 make[1]: Nothing to be done for 'all'. 
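The "Installing symlink pointing to librte_*.so.24.0 …" entries earlier in this step follow the standard Linux shared-library versioning layout: one real DSO (`.so.24.0`), a soname link (`.so.24`) used by the runtime loader, and an unversioned dev link (`.so`) used by the linker's `-l` lookup. A sketch of that chain with a hypothetical library name `librte_demo`:

```shell
#!/bin/sh
# Sketch only: librte_demo is a made-up name illustrating the
# .so -> .so.24 -> .so.24.0 chain the install log creates.
set -e
d=$(mktemp -d)
touch "$d/librte_demo.so.24.0"                     # real file (the built DSO)
ln -s librte_demo.so.24.0 "$d/librte_demo.so.24"   # soname link, resolved at runtime
ln -s librte_demo.so.24   "$d/librte_demo.so"      # dev link, resolved at link time (-lrte_demo)
readlink "$d/librte_demo.so"
```

With this layout, rebuilding against a new `librte_demo.so.24.x` needs no relink of consumers, while an ABI break would bump the soname to `.so.25` and leave existing binaries bound to the old chain.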
00:04:04.847 CC lib/ut_mock/mock.o 00:04:04.847 CC lib/log/log.o 00:04:04.847 CC lib/log/log_flags.o 00:04:04.847 CC lib/log/log_deprecated.o 00:04:04.847 CC lib/ut/ut.o 00:04:04.847 LIB libspdk_ut_mock.a 00:04:04.847 LIB libspdk_log.a 00:04:04.847 SO libspdk_ut_mock.so.6.0 00:04:04.847 LIB libspdk_ut.a 00:04:04.847 SO libspdk_log.so.7.0 00:04:04.847 SO libspdk_ut.so.2.0 00:04:04.847 SYMLINK libspdk_ut_mock.so 00:04:04.847 SYMLINK libspdk_log.so 00:04:04.847 SYMLINK libspdk_ut.so 00:04:05.106 CC lib/util/bit_array.o 00:04:05.106 CC lib/util/base64.o 00:04:05.106 CC lib/util/crc16.o 00:04:05.106 CC lib/util/cpuset.o 00:04:05.106 CC lib/util/crc32.o 00:04:05.106 CC lib/util/crc32c.o 00:04:05.106 CC lib/dma/dma.o 00:04:05.106 CC lib/ioat/ioat.o 00:04:05.106 CXX lib/trace_parser/trace.o 00:04:05.365 CC lib/vfio_user/host/vfio_user_pci.o 00:04:05.365 CC lib/util/crc32_ieee.o 00:04:05.365 CC lib/util/crc64.o 00:04:05.365 CC lib/util/dif.o 00:04:05.365 CC lib/util/fd.o 00:04:05.365 LIB libspdk_dma.a 00:04:05.365 CC lib/util/fd_group.o 00:04:05.365 CC lib/util/file.o 00:04:05.365 SO libspdk_dma.so.5.0 00:04:05.365 CC lib/util/hexlify.o 00:04:05.365 CC lib/util/iov.o 00:04:05.365 SYMLINK libspdk_dma.so 00:04:05.365 CC lib/util/math.o 00:04:05.365 CC lib/util/net.o 00:04:05.365 LIB libspdk_ioat.a 00:04:05.365 SO libspdk_ioat.so.7.0 00:04:05.624 CC lib/util/pipe.o 00:04:05.624 CC lib/vfio_user/host/vfio_user.o 00:04:05.624 SYMLINK libspdk_ioat.so 00:04:05.624 CC lib/util/strerror_tls.o 00:04:05.624 CC lib/util/string.o 00:04:05.624 CC lib/util/uuid.o 00:04:05.624 CC lib/util/xor.o 00:04:05.624 CC lib/util/zipf.o 00:04:05.624 CC lib/util/md5.o 00:04:05.624 LIB libspdk_vfio_user.a 00:04:05.883 SO libspdk_vfio_user.so.5.0 00:04:05.883 SYMLINK libspdk_vfio_user.so 00:04:05.883 LIB libspdk_util.a 00:04:05.883 SO libspdk_util.so.10.0 00:04:06.141 LIB libspdk_trace_parser.a 00:04:06.141 SYMLINK libspdk_util.so 00:04:06.141 SO libspdk_trace_parser.so.6.0 00:04:06.141 SYMLINK 
libspdk_trace_parser.so 00:04:06.141 CC lib/conf/conf.o 00:04:06.141 CC lib/rdma_provider/common.o 00:04:06.141 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:06.141 CC lib/json/json_parse.o 00:04:06.141 CC lib/json/json_util.o 00:04:06.141 CC lib/rdma_utils/rdma_utils.o 00:04:06.141 CC lib/json/json_write.o 00:04:06.141 CC lib/vmd/vmd.o 00:04:06.141 CC lib/env_dpdk/env.o 00:04:06.141 CC lib/idxd/idxd.o 00:04:06.401 CC lib/vmd/led.o 00:04:06.401 LIB libspdk_rdma_provider.a 00:04:06.401 LIB libspdk_conf.a 00:04:06.401 SO libspdk_rdma_provider.so.6.0 00:04:06.401 SO libspdk_conf.so.6.0 00:04:06.401 CC lib/env_dpdk/memory.o 00:04:06.401 SYMLINK libspdk_rdma_provider.so 00:04:06.401 CC lib/idxd/idxd_user.o 00:04:06.401 LIB libspdk_rdma_utils.a 00:04:06.401 CC lib/idxd/idxd_kernel.o 00:04:06.401 SYMLINK libspdk_conf.so 00:04:06.401 CC lib/env_dpdk/pci.o 00:04:06.401 SO libspdk_rdma_utils.so.1.0 00:04:06.401 LIB libspdk_json.a 00:04:06.659 CC lib/env_dpdk/init.o 00:04:06.659 SYMLINK libspdk_rdma_utils.so 00:04:06.659 SO libspdk_json.so.6.0 00:04:06.659 CC lib/env_dpdk/threads.o 00:04:06.659 SYMLINK libspdk_json.so 00:04:06.659 CC lib/env_dpdk/pci_ioat.o 00:04:06.659 CC lib/env_dpdk/pci_virtio.o 00:04:06.660 CC lib/env_dpdk/pci_vmd.o 00:04:06.660 CC lib/env_dpdk/pci_idxd.o 00:04:06.660 CC lib/env_dpdk/pci_event.o 00:04:06.919 CC lib/env_dpdk/sigbus_handler.o 00:04:06.919 CC lib/jsonrpc/jsonrpc_server.o 00:04:06.919 CC lib/env_dpdk/pci_dpdk.o 00:04:06.919 LIB libspdk_idxd.a 00:04:06.919 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:06.919 LIB libspdk_vmd.a 00:04:06.919 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:06.919 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:06.919 SO libspdk_idxd.so.12.1 00:04:06.919 SO libspdk_vmd.so.6.0 00:04:06.919 CC lib/jsonrpc/jsonrpc_client.o 00:04:06.919 SYMLINK libspdk_vmd.so 00:04:06.919 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:06.919 SYMLINK libspdk_idxd.so 00:04:07.179 LIB libspdk_jsonrpc.a 00:04:07.179 SO libspdk_jsonrpc.so.6.0 00:04:07.179 SYMLINK 
libspdk_jsonrpc.so 00:04:07.748 CC lib/rpc/rpc.o 00:04:07.748 LIB libspdk_env_dpdk.a 00:04:07.748 SO libspdk_env_dpdk.so.15.0 00:04:08.008 LIB libspdk_rpc.a 00:04:08.008 SO libspdk_rpc.so.6.0 00:04:08.008 SYMLINK libspdk_env_dpdk.so 00:04:08.008 SYMLINK libspdk_rpc.so 00:04:08.579 CC lib/notify/notify.o 00:04:08.579 CC lib/notify/notify_rpc.o 00:04:08.579 CC lib/keyring/keyring.o 00:04:08.579 CC lib/keyring/keyring_rpc.o 00:04:08.579 CC lib/trace/trace.o 00:04:08.579 CC lib/trace/trace_flags.o 00:04:08.579 CC lib/trace/trace_rpc.o 00:04:08.579 LIB libspdk_notify.a 00:04:08.580 SO libspdk_notify.so.6.0 00:04:08.580 LIB libspdk_keyring.a 00:04:08.580 SO libspdk_keyring.so.2.0 00:04:08.580 SYMLINK libspdk_notify.so 00:04:08.580 LIB libspdk_trace.a 00:04:08.839 SO libspdk_trace.so.11.0 00:04:08.839 SYMLINK libspdk_keyring.so 00:04:08.839 SYMLINK libspdk_trace.so 00:04:09.100 CC lib/thread/thread.o 00:04:09.100 CC lib/thread/iobuf.o 00:04:09.100 CC lib/sock/sock.o 00:04:09.100 CC lib/sock/sock_rpc.o 00:04:09.670 LIB libspdk_sock.a 00:04:09.670 SO libspdk_sock.so.10.0 00:04:09.670 SYMLINK libspdk_sock.so 00:04:10.240 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:10.240 CC lib/nvme/nvme_ctrlr.o 00:04:10.240 CC lib/nvme/nvme_fabric.o 00:04:10.240 CC lib/nvme/nvme_ns.o 00:04:10.240 CC lib/nvme/nvme_ns_cmd.o 00:04:10.240 CC lib/nvme/nvme_pcie_common.o 00:04:10.240 CC lib/nvme/nvme_pcie.o 00:04:10.240 CC lib/nvme/nvme_qpair.o 00:04:10.240 CC lib/nvme/nvme.o 00:04:10.499 LIB libspdk_thread.a 00:04:10.759 SO libspdk_thread.so.10.1 00:04:10.759 CC lib/nvme/nvme_quirks.o 00:04:10.759 SYMLINK libspdk_thread.so 00:04:10.759 CC lib/nvme/nvme_transport.o 00:04:10.759 CC lib/nvme/nvme_discovery.o 00:04:10.759 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:10.759 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:10.759 CC lib/nvme/nvme_tcp.o 00:04:11.018 CC lib/nvme/nvme_opal.o 00:04:11.019 CC lib/nvme/nvme_io_msg.o 00:04:11.278 CC lib/nvme/nvme_poll_group.o 00:04:11.278 CC lib/nvme/nvme_zns.o 00:04:11.278 CC 
lib/nvme/nvme_stubs.o 00:04:11.278 CC lib/nvme/nvme_auth.o 00:04:11.278 CC lib/nvme/nvme_cuse.o 00:04:11.538 CC lib/nvme/nvme_rdma.o 00:04:11.538 CC lib/accel/accel.o 00:04:11.797 CC lib/blob/blobstore.o 00:04:11.797 CC lib/blob/request.o 00:04:11.797 CC lib/virtio/virtio.o 00:04:11.797 CC lib/init/json_config.o 00:04:12.056 CC lib/init/subsystem.o 00:04:12.056 CC lib/virtio/virtio_vhost_user.o 00:04:12.056 CC lib/init/subsystem_rpc.o 00:04:12.056 CC lib/init/rpc.o 00:04:12.056 CC lib/accel/accel_rpc.o 00:04:12.056 CC lib/accel/accel_sw.o 00:04:12.316 CC lib/virtio/virtio_vfio_user.o 00:04:12.316 LIB libspdk_init.a 00:04:12.316 CC lib/blob/zeroes.o 00:04:12.316 SO libspdk_init.so.6.0 00:04:12.316 CC lib/blob/blob_bs_dev.o 00:04:12.316 SYMLINK libspdk_init.so 00:04:12.316 CC lib/virtio/virtio_pci.o 00:04:12.316 CC lib/fsdev/fsdev.o 00:04:12.316 CC lib/fsdev/fsdev_io.o 00:04:12.316 CC lib/fsdev/fsdev_rpc.o 00:04:12.575 CC lib/event/app.o 00:04:12.575 CC lib/event/reactor.o 00:04:12.575 CC lib/event/log_rpc.o 00:04:12.575 CC lib/event/app_rpc.o 00:04:12.575 LIB libspdk_virtio.a 00:04:12.575 SO libspdk_virtio.so.7.0 00:04:12.575 LIB libspdk_accel.a 00:04:12.575 CC lib/event/scheduler_static.o 00:04:12.836 SYMLINK libspdk_virtio.so 00:04:12.836 SO libspdk_accel.so.16.0 00:04:12.836 SYMLINK libspdk_accel.so 00:04:12.836 LIB libspdk_nvme.a 00:04:12.836 LIB libspdk_event.a 00:04:13.095 SO libspdk_nvme.so.14.0 00:04:13.095 SO libspdk_event.so.14.0 00:04:13.095 CC lib/bdev/bdev.o 00:04:13.095 CC lib/bdev/bdev_rpc.o 00:04:13.095 CC lib/bdev/scsi_nvme.o 00:04:13.095 CC lib/bdev/part.o 00:04:13.095 CC lib/bdev/bdev_zone.o 00:04:13.095 SYMLINK libspdk_event.so 00:04:13.095 LIB libspdk_fsdev.a 00:04:13.095 SO libspdk_fsdev.so.1.0 00:04:13.095 SYMLINK libspdk_fsdev.so 00:04:13.355 SYMLINK libspdk_nvme.so 00:04:13.614 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:14.184 LIB libspdk_fuse_dispatcher.a 00:04:14.184 SO libspdk_fuse_dispatcher.so.1.0 00:04:14.444 SYMLINK 
libspdk_fuse_dispatcher.so 00:04:15.013 LIB libspdk_blob.a 00:04:15.013 SO libspdk_blob.so.11.0 00:04:15.273 SYMLINK libspdk_blob.so 00:04:15.533 CC lib/blobfs/tree.o 00:04:15.533 CC lib/blobfs/blobfs.o 00:04:15.533 CC lib/lvol/lvol.o 00:04:15.533 LIB libspdk_bdev.a 00:04:15.533 SO libspdk_bdev.so.16.0 00:04:15.793 SYMLINK libspdk_bdev.so 00:04:16.054 CC lib/ublk/ublk.o 00:04:16.054 CC lib/ublk/ublk_rpc.o 00:04:16.054 CC lib/scsi/dev.o 00:04:16.054 CC lib/scsi/lun.o 00:04:16.054 CC lib/scsi/port.o 00:04:16.054 CC lib/nbd/nbd.o 00:04:16.054 CC lib/nvmf/ctrlr.o 00:04:16.054 CC lib/ftl/ftl_core.o 00:04:16.054 CC lib/nvmf/ctrlr_discovery.o 00:04:16.054 CC lib/nvmf/ctrlr_bdev.o 00:04:16.313 CC lib/nvmf/subsystem.o 00:04:16.313 CC lib/scsi/scsi.o 00:04:16.313 LIB libspdk_blobfs.a 00:04:16.313 SO libspdk_blobfs.so.10.0 00:04:16.313 SYMLINK libspdk_blobfs.so 00:04:16.313 CC lib/scsi/scsi_bdev.o 00:04:16.313 CC lib/nbd/nbd_rpc.o 00:04:16.313 CC lib/ftl/ftl_init.o 00:04:16.313 CC lib/ftl/ftl_layout.o 00:04:16.574 LIB libspdk_lvol.a 00:04:16.574 SO libspdk_lvol.so.10.0 00:04:16.574 LIB libspdk_nbd.a 00:04:16.574 SO libspdk_nbd.so.7.0 00:04:16.574 SYMLINK libspdk_lvol.so 00:04:16.574 CC lib/nvmf/nvmf.o 00:04:16.574 LIB libspdk_ublk.a 00:04:16.574 CC lib/nvmf/nvmf_rpc.o 00:04:16.574 SYMLINK libspdk_nbd.so 00:04:16.574 CC lib/nvmf/transport.o 00:04:16.574 SO libspdk_ublk.so.3.0 00:04:16.574 CC lib/ftl/ftl_debug.o 00:04:16.574 SYMLINK libspdk_ublk.so 00:04:16.574 CC lib/nvmf/tcp.o 00:04:16.834 CC lib/scsi/scsi_pr.o 00:04:16.834 CC lib/scsi/scsi_rpc.o 00:04:16.834 CC lib/scsi/task.o 00:04:16.834 CC lib/ftl/ftl_io.o 00:04:16.834 CC lib/ftl/ftl_sb.o 00:04:17.094 CC lib/ftl/ftl_l2p.o 00:04:17.094 LIB libspdk_scsi.a 00:04:17.094 CC lib/ftl/ftl_l2p_flat.o 00:04:17.094 CC lib/ftl/ftl_nv_cache.o 00:04:17.094 SO libspdk_scsi.so.9.0 00:04:17.354 SYMLINK libspdk_scsi.so 00:04:17.354 CC lib/ftl/ftl_band.o 00:04:17.354 CC lib/nvmf/stubs.o 00:04:17.354 CC lib/nvmf/mdns_server.o 00:04:17.354 CC 
lib/nvmf/rdma.o 00:04:17.354 CC lib/nvmf/auth.o 00:04:17.614 CC lib/iscsi/conn.o 00:04:17.614 CC lib/ftl/ftl_band_ops.o 00:04:17.614 CC lib/vhost/vhost.o 00:04:17.614 CC lib/vhost/vhost_rpc.o 00:04:17.614 CC lib/vhost/vhost_scsi.o 00:04:17.874 CC lib/vhost/vhost_blk.o 00:04:17.874 CC lib/vhost/rte_vhost_user.o 00:04:18.134 CC lib/iscsi/init_grp.o 00:04:18.134 CC lib/iscsi/iscsi.o 00:04:18.134 CC lib/iscsi/param.o 00:04:18.134 CC lib/iscsi/portal_grp.o 00:04:18.134 CC lib/ftl/ftl_writer.o 00:04:18.394 CC lib/iscsi/tgt_node.o 00:04:18.394 CC lib/iscsi/iscsi_subsystem.o 00:04:18.394 CC lib/iscsi/iscsi_rpc.o 00:04:18.394 CC lib/ftl/ftl_rq.o 00:04:18.394 CC lib/iscsi/task.o 00:04:18.653 CC lib/ftl/ftl_reloc.o 00:04:18.653 CC lib/ftl/ftl_l2p_cache.o 00:04:18.653 CC lib/ftl/ftl_p2l.o 00:04:18.653 CC lib/ftl/ftl_p2l_log.o 00:04:18.914 CC lib/ftl/mngt/ftl_mngt.o 00:04:18.914 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:18.914 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:18.914 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:18.914 LIB libspdk_vhost.a 00:04:18.914 SO libspdk_vhost.so.8.0 00:04:18.914 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:18.914 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:18.914 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:18.914 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:19.174 SYMLINK libspdk_vhost.so 00:04:19.174 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:19.174 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:19.174 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:19.174 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:19.174 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:19.174 CC lib/ftl/utils/ftl_conf.o 00:04:19.174 CC lib/ftl/utils/ftl_md.o 00:04:19.174 CC lib/ftl/utils/ftl_mempool.o 00:04:19.434 CC lib/ftl/utils/ftl_bitmap.o 00:04:19.434 CC lib/ftl/utils/ftl_property.o 00:04:19.434 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:19.434 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:19.434 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:19.434 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:19.434 CC lib/ftl/upgrade/ftl_band_upgrade.o 
00:04:19.434 LIB libspdk_iscsi.a 00:04:19.434 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:19.434 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:19.695 SO libspdk_iscsi.so.8.0 00:04:19.695 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:19.695 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:19.695 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:19.695 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:19.695 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:19.695 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:19.695 CC lib/ftl/base/ftl_base_dev.o 00:04:19.695 CC lib/ftl/base/ftl_base_bdev.o 00:04:19.695 SYMLINK libspdk_iscsi.so 00:04:19.695 CC lib/ftl/ftl_trace.o 00:04:19.977 LIB libspdk_nvmf.a 00:04:19.977 SO libspdk_nvmf.so.19.0 00:04:19.977 LIB libspdk_ftl.a 00:04:20.288 SYMLINK libspdk_nvmf.so 00:04:20.289 SO libspdk_ftl.so.9.0 00:04:20.549 SYMLINK libspdk_ftl.so 00:04:20.808 CC module/env_dpdk/env_dpdk_rpc.o 00:04:20.808 CC module/sock/posix/posix.o 00:04:20.808 CC module/accel/error/accel_error.o 00:04:20.808 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:20.808 CC module/keyring/linux/keyring.o 00:04:20.808 CC module/blob/bdev/blob_bdev.o 00:04:20.809 CC module/fsdev/aio/fsdev_aio.o 00:04:20.809 CC module/keyring/file/keyring.o 00:04:20.809 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:20.809 CC module/accel/ioat/accel_ioat.o 00:04:21.068 LIB libspdk_env_dpdk_rpc.a 00:04:21.068 SO libspdk_env_dpdk_rpc.so.6.0 00:04:21.068 SYMLINK libspdk_env_dpdk_rpc.so 00:04:21.068 CC module/keyring/file/keyring_rpc.o 00:04:21.068 CC module/keyring/linux/keyring_rpc.o 00:04:21.068 LIB libspdk_scheduler_dpdk_governor.a 00:04:21.068 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:21.068 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:21.068 CC module/accel/error/accel_error_rpc.o 00:04:21.068 LIB libspdk_scheduler_dynamic.a 00:04:21.068 CC module/accel/ioat/accel_ioat_rpc.o 00:04:21.068 SO libspdk_scheduler_dynamic.so.4.0 00:04:21.068 LIB libspdk_keyring_file.a 00:04:21.068 SYMLINK libspdk_scheduler_dpdk_governor.so 
00:04:21.068 CC module/fsdev/aio/linux_aio_mgr.o 00:04:21.068 SYMLINK libspdk_scheduler_dynamic.so 00:04:21.068 SO libspdk_keyring_file.so.2.0 00:04:21.068 LIB libspdk_blob_bdev.a 00:04:21.068 LIB libspdk_keyring_linux.a 00:04:21.068 SO libspdk_blob_bdev.so.11.0 00:04:21.328 SO libspdk_keyring_linux.so.1.0 00:04:21.328 LIB libspdk_accel_error.a 00:04:21.328 SYMLINK libspdk_keyring_file.so 00:04:21.328 LIB libspdk_accel_ioat.a 00:04:21.328 SO libspdk_accel_ioat.so.6.0 00:04:21.328 SO libspdk_accel_error.so.2.0 00:04:21.328 SYMLINK libspdk_blob_bdev.so 00:04:21.328 SYMLINK libspdk_keyring_linux.so 00:04:21.328 SYMLINK libspdk_accel_ioat.so 00:04:21.328 CC module/scheduler/gscheduler/gscheduler.o 00:04:21.328 SYMLINK libspdk_accel_error.so 00:04:21.328 CC module/accel/dsa/accel_dsa.o 00:04:21.328 CC module/accel/iaa/accel_iaa.o 00:04:21.328 LIB libspdk_scheduler_gscheduler.a 00:04:21.588 SO libspdk_scheduler_gscheduler.so.4.0 00:04:21.588 CC module/bdev/gpt/gpt.o 00:04:21.588 CC module/bdev/error/vbdev_error.o 00:04:21.588 CC module/bdev/delay/vbdev_delay.o 00:04:21.588 CC module/blobfs/bdev/blobfs_bdev.o 00:04:21.588 CC module/bdev/lvol/vbdev_lvol.o 00:04:21.588 SYMLINK libspdk_scheduler_gscheduler.so 00:04:21.588 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:21.588 CC module/accel/iaa/accel_iaa_rpc.o 00:04:21.588 LIB libspdk_fsdev_aio.a 00:04:21.588 SO libspdk_fsdev_aio.so.1.0 00:04:21.589 CC module/accel/dsa/accel_dsa_rpc.o 00:04:21.589 CC module/bdev/gpt/vbdev_gpt.o 00:04:21.589 SYMLINK libspdk_fsdev_aio.so 00:04:21.589 LIB libspdk_accel_iaa.a 00:04:21.589 LIB libspdk_blobfs_bdev.a 00:04:21.589 CC module/bdev/error/vbdev_error_rpc.o 00:04:21.849 SO libspdk_accel_iaa.so.3.0 00:04:21.849 SO libspdk_blobfs_bdev.so.6.0 00:04:21.849 LIB libspdk_sock_posix.a 00:04:21.849 LIB libspdk_accel_dsa.a 00:04:21.849 SO libspdk_sock_posix.so.6.0 00:04:21.849 SYMLINK libspdk_blobfs_bdev.so 00:04:21.849 SYMLINK libspdk_accel_iaa.so 00:04:21.849 CC module/bdev/malloc/bdev_malloc.o 
00:04:21.849 SO libspdk_accel_dsa.so.5.0 00:04:21.849 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:21.849 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:21.849 CC module/bdev/null/bdev_null.o 00:04:21.849 CC module/bdev/null/bdev_null_rpc.o 00:04:21.849 SYMLINK libspdk_accel_dsa.so 00:04:21.849 SYMLINK libspdk_sock_posix.so 00:04:21.849 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:21.849 LIB libspdk_bdev_error.a 00:04:21.849 SO libspdk_bdev_error.so.6.0 00:04:21.849 LIB libspdk_bdev_gpt.a 00:04:21.849 SO libspdk_bdev_gpt.so.6.0 00:04:21.849 SYMLINK libspdk_bdev_error.so 00:04:21.849 LIB libspdk_bdev_delay.a 00:04:22.109 SYMLINK libspdk_bdev_gpt.so 00:04:22.109 SO libspdk_bdev_delay.so.6.0 00:04:22.109 CC module/bdev/nvme/bdev_nvme.o 00:04:22.109 SYMLINK libspdk_bdev_delay.so 00:04:22.109 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:22.109 LIB libspdk_bdev_null.a 00:04:22.109 CC module/bdev/raid/bdev_raid.o 00:04:22.109 CC module/bdev/passthru/vbdev_passthru.o 00:04:22.109 CC module/bdev/split/vbdev_split.o 00:04:22.109 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:22.109 SO libspdk_bdev_null.so.6.0 00:04:22.109 CC module/bdev/aio/bdev_aio.o 00:04:22.109 LIB libspdk_bdev_lvol.a 00:04:22.109 SYMLINK libspdk_bdev_null.so 00:04:22.109 CC module/bdev/aio/bdev_aio_rpc.o 00:04:22.109 SO libspdk_bdev_lvol.so.6.0 00:04:22.109 LIB libspdk_bdev_malloc.a 00:04:22.368 SO libspdk_bdev_malloc.so.6.0 00:04:22.368 SYMLINK libspdk_bdev_lvol.so 00:04:22.368 CC module/bdev/nvme/nvme_rpc.o 00:04:22.368 SYMLINK libspdk_bdev_malloc.so 00:04:22.368 CC module/bdev/raid/bdev_raid_rpc.o 00:04:22.369 CC module/bdev/split/vbdev_split_rpc.o 00:04:22.369 CC module/bdev/raid/bdev_raid_sb.o 00:04:22.369 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:22.369 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:22.369 LIB libspdk_bdev_split.a 00:04:22.369 SO libspdk_bdev_split.so.6.0 00:04:22.629 SYMLINK libspdk_bdev_split.so 00:04:22.629 CC module/bdev/raid/raid0.o 00:04:22.629 CC 
module/bdev/raid/raid1.o 00:04:22.629 LIB libspdk_bdev_zone_block.a 00:04:22.629 LIB libspdk_bdev_passthru.a 00:04:22.629 SO libspdk_bdev_zone_block.so.6.0 00:04:22.629 LIB libspdk_bdev_aio.a 00:04:22.629 SO libspdk_bdev_passthru.so.6.0 00:04:22.629 SO libspdk_bdev_aio.so.6.0 00:04:22.629 CC module/bdev/raid/concat.o 00:04:22.629 SYMLINK libspdk_bdev_zone_block.so 00:04:22.629 CC module/bdev/raid/raid5f.o 00:04:22.629 SYMLINK libspdk_bdev_passthru.so 00:04:22.629 CC module/bdev/nvme/bdev_mdns_client.o 00:04:22.629 SYMLINK libspdk_bdev_aio.so 00:04:22.629 CC module/bdev/ftl/bdev_ftl.o 00:04:22.890 CC module/bdev/nvme/vbdev_opal.o 00:04:22.890 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:22.890 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:22.890 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:22.890 CC module/bdev/iscsi/bdev_iscsi.o 00:04:22.890 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:22.890 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:22.890 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:22.890 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:23.150 LIB libspdk_bdev_ftl.a 00:04:23.150 SO libspdk_bdev_ftl.so.6.0 00:04:23.150 SYMLINK libspdk_bdev_ftl.so 00:04:23.150 LIB libspdk_bdev_raid.a 00:04:23.150 LIB libspdk_bdev_iscsi.a 00:04:23.411 SO libspdk_bdev_raid.so.6.0 00:04:23.411 SO libspdk_bdev_iscsi.so.6.0 00:04:23.411 SYMLINK libspdk_bdev_iscsi.so 00:04:23.411 SYMLINK libspdk_bdev_raid.so 00:04:23.411 LIB libspdk_bdev_virtio.a 00:04:23.411 SO libspdk_bdev_virtio.so.6.0 00:04:23.671 SYMLINK libspdk_bdev_virtio.so 00:04:24.613 LIB libspdk_bdev_nvme.a 00:04:24.613 SO libspdk_bdev_nvme.so.7.0 00:04:24.613 SYMLINK libspdk_bdev_nvme.so 00:04:25.183 CC module/event/subsystems/sock/sock.o 00:04:25.183 CC module/event/subsystems/fsdev/fsdev.o 00:04:25.183 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:25.183 CC module/event/subsystems/scheduler/scheduler.o 00:04:25.183 CC module/event/subsystems/vmd/vmd.o 00:04:25.183 CC module/event/subsystems/keyring/keyring.o 
00:04:25.183 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:25.183 CC module/event/subsystems/iobuf/iobuf.o 00:04:25.183 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:25.443 LIB libspdk_event_keyring.a 00:04:25.443 LIB libspdk_event_vhost_blk.a 00:04:25.443 LIB libspdk_event_fsdev.a 00:04:25.443 LIB libspdk_event_sock.a 00:04:25.443 LIB libspdk_event_vmd.a 00:04:25.443 LIB libspdk_event_scheduler.a 00:04:25.443 SO libspdk_event_keyring.so.1.0 00:04:25.443 SO libspdk_event_vhost_blk.so.3.0 00:04:25.443 SO libspdk_event_fsdev.so.1.0 00:04:25.443 SO libspdk_event_sock.so.5.0 00:04:25.443 SO libspdk_event_vmd.so.6.0 00:04:25.443 LIB libspdk_event_iobuf.a 00:04:25.443 SO libspdk_event_scheduler.so.4.0 00:04:25.443 SYMLINK libspdk_event_vhost_blk.so 00:04:25.443 SYMLINK libspdk_event_keyring.so 00:04:25.443 SO libspdk_event_iobuf.so.3.0 00:04:25.443 SYMLINK libspdk_event_fsdev.so 00:04:25.443 SYMLINK libspdk_event_sock.so 00:04:25.443 SYMLINK libspdk_event_vmd.so 00:04:25.443 SYMLINK libspdk_event_scheduler.so 00:04:25.443 SYMLINK libspdk_event_iobuf.so 00:04:26.014 CC module/event/subsystems/accel/accel.o 00:04:26.014 LIB libspdk_event_accel.a 00:04:26.014 SO libspdk_event_accel.so.6.0 00:04:26.275 SYMLINK libspdk_event_accel.so 00:04:26.535 CC module/event/subsystems/bdev/bdev.o 00:04:26.795 LIB libspdk_event_bdev.a 00:04:26.795 SO libspdk_event_bdev.so.6.0 00:04:26.795 SYMLINK libspdk_event_bdev.so 00:04:27.365 CC module/event/subsystems/nbd/nbd.o 00:04:27.365 CC module/event/subsystems/ublk/ublk.o 00:04:27.365 CC module/event/subsystems/scsi/scsi.o 00:04:27.365 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:27.365 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:27.365 LIB libspdk_event_ublk.a 00:04:27.365 LIB libspdk_event_nbd.a 00:04:27.365 LIB libspdk_event_scsi.a 00:04:27.365 SO libspdk_event_nbd.so.6.0 00:04:27.365 SO libspdk_event_ublk.so.3.0 00:04:27.365 SO libspdk_event_scsi.so.6.0 00:04:27.365 LIB libspdk_event_nvmf.a 00:04:27.365 SYMLINK 
libspdk_event_nbd.so 00:04:27.365 SYMLINK libspdk_event_ublk.so 00:04:27.625 SYMLINK libspdk_event_scsi.so 00:04:27.625 SO libspdk_event_nvmf.so.6.0 00:04:27.625 SYMLINK libspdk_event_nvmf.so 00:04:27.884 CC module/event/subsystems/iscsi/iscsi.o 00:04:27.884 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:28.144 LIB libspdk_event_vhost_scsi.a 00:04:28.144 LIB libspdk_event_iscsi.a 00:04:28.144 SO libspdk_event_vhost_scsi.so.3.0 00:04:28.144 SO libspdk_event_iscsi.so.6.0 00:04:28.144 SYMLINK libspdk_event_vhost_scsi.so 00:04:28.144 SYMLINK libspdk_event_iscsi.so 00:04:28.404 SO libspdk.so.6.0 00:04:28.404 SYMLINK libspdk.so 00:04:28.665 CXX app/trace/trace.o 00:04:28.665 CC app/trace_record/trace_record.o 00:04:28.665 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:28.665 CC app/nvmf_tgt/nvmf_main.o 00:04:28.665 CC app/iscsi_tgt/iscsi_tgt.o 00:04:28.665 CC examples/util/zipf/zipf.o 00:04:28.665 CC test/thread/poller_perf/poller_perf.o 00:04:28.665 CC examples/ioat/perf/perf.o 00:04:28.665 CC test/dma/test_dma/test_dma.o 00:04:28.926 CC test/app/bdev_svc/bdev_svc.o 00:04:28.926 LINK interrupt_tgt 00:04:28.926 LINK zipf 00:04:28.926 LINK iscsi_tgt 00:04:28.926 LINK poller_perf 00:04:28.926 LINK nvmf_tgt 00:04:28.926 LINK spdk_trace_record 00:04:28.926 LINK bdev_svc 00:04:28.926 LINK ioat_perf 00:04:29.186 LINK spdk_trace 00:04:29.186 CC examples/ioat/verify/verify.o 00:04:29.186 CC app/spdk_lspci/spdk_lspci.o 00:04:29.186 CC app/spdk_tgt/spdk_tgt.o 00:04:29.186 CC app/spdk_nvme_perf/perf.o 00:04:29.186 LINK test_dma 00:04:29.186 CC test/app/histogram_perf/histogram_perf.o 00:04:29.446 CC examples/sock/hello_world/hello_sock.o 00:04:29.446 CC examples/thread/thread/thread_ex.o 00:04:29.446 LINK spdk_lspci 00:04:29.446 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:29.446 LINK verify 00:04:29.446 CC test/app/jsoncat/jsoncat.o 00:04:29.446 LINK spdk_tgt 00:04:29.446 LINK histogram_perf 00:04:29.446 LINK jsoncat 00:04:29.447 TEST_HEADER include/spdk/accel.h 
00:04:29.447 TEST_HEADER include/spdk/accel_module.h 00:04:29.447 TEST_HEADER include/spdk/assert.h 00:04:29.447 TEST_HEADER include/spdk/barrier.h 00:04:29.447 TEST_HEADER include/spdk/base64.h 00:04:29.447 TEST_HEADER include/spdk/bdev.h 00:04:29.447 TEST_HEADER include/spdk/bdev_module.h 00:04:29.447 TEST_HEADER include/spdk/bdev_zone.h 00:04:29.447 TEST_HEADER include/spdk/bit_array.h 00:04:29.447 TEST_HEADER include/spdk/bit_pool.h 00:04:29.447 TEST_HEADER include/spdk/blob_bdev.h 00:04:29.447 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:29.447 TEST_HEADER include/spdk/blobfs.h 00:04:29.447 LINK thread 00:04:29.707 TEST_HEADER include/spdk/blob.h 00:04:29.707 TEST_HEADER include/spdk/conf.h 00:04:29.707 TEST_HEADER include/spdk/config.h 00:04:29.707 TEST_HEADER include/spdk/cpuset.h 00:04:29.707 TEST_HEADER include/spdk/crc16.h 00:04:29.707 LINK hello_sock 00:04:29.707 TEST_HEADER include/spdk/crc32.h 00:04:29.707 CC app/spdk_nvme_identify/identify.o 00:04:29.707 TEST_HEADER include/spdk/crc64.h 00:04:29.707 TEST_HEADER include/spdk/dif.h 00:04:29.707 TEST_HEADER include/spdk/dma.h 00:04:29.707 TEST_HEADER include/spdk/endian.h 00:04:29.707 TEST_HEADER include/spdk/env_dpdk.h 00:04:29.707 TEST_HEADER include/spdk/env.h 00:04:29.707 TEST_HEADER include/spdk/event.h 00:04:29.707 TEST_HEADER include/spdk/fd_group.h 00:04:29.707 TEST_HEADER include/spdk/fd.h 00:04:29.707 TEST_HEADER include/spdk/file.h 00:04:29.707 CC app/spdk_nvme_discover/discovery_aer.o 00:04:29.707 TEST_HEADER include/spdk/fsdev.h 00:04:29.707 TEST_HEADER include/spdk/fsdev_module.h 00:04:29.707 TEST_HEADER include/spdk/ftl.h 00:04:29.707 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:29.707 TEST_HEADER include/spdk/gpt_spec.h 00:04:29.707 TEST_HEADER include/spdk/hexlify.h 00:04:29.707 TEST_HEADER include/spdk/histogram_data.h 00:04:29.707 TEST_HEADER include/spdk/idxd.h 00:04:29.707 TEST_HEADER include/spdk/idxd_spec.h 00:04:29.707 TEST_HEADER include/spdk/init.h 00:04:29.707 TEST_HEADER 
include/spdk/ioat.h 00:04:29.707 TEST_HEADER include/spdk/ioat_spec.h 00:04:29.707 TEST_HEADER include/spdk/iscsi_spec.h 00:04:29.707 TEST_HEADER include/spdk/json.h 00:04:29.707 TEST_HEADER include/spdk/jsonrpc.h 00:04:29.707 TEST_HEADER include/spdk/keyring.h 00:04:29.707 TEST_HEADER include/spdk/keyring_module.h 00:04:29.707 TEST_HEADER include/spdk/likely.h 00:04:29.707 TEST_HEADER include/spdk/log.h 00:04:29.707 TEST_HEADER include/spdk/lvol.h 00:04:29.707 TEST_HEADER include/spdk/md5.h 00:04:29.707 TEST_HEADER include/spdk/memory.h 00:04:29.707 TEST_HEADER include/spdk/mmio.h 00:04:29.707 TEST_HEADER include/spdk/nbd.h 00:04:29.707 TEST_HEADER include/spdk/net.h 00:04:29.707 TEST_HEADER include/spdk/notify.h 00:04:29.707 TEST_HEADER include/spdk/nvme.h 00:04:29.707 TEST_HEADER include/spdk/nvme_intel.h 00:04:29.707 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:29.707 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:29.707 TEST_HEADER include/spdk/nvme_spec.h 00:04:29.707 TEST_HEADER include/spdk/nvme_zns.h 00:04:29.707 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:29.707 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:29.707 TEST_HEADER include/spdk/nvmf.h 00:04:29.707 TEST_HEADER include/spdk/nvmf_spec.h 00:04:29.708 TEST_HEADER include/spdk/nvmf_transport.h 00:04:29.708 TEST_HEADER include/spdk/opal.h 00:04:29.708 TEST_HEADER include/spdk/opal_spec.h 00:04:29.708 TEST_HEADER include/spdk/pci_ids.h 00:04:29.708 TEST_HEADER include/spdk/pipe.h 00:04:29.708 TEST_HEADER include/spdk/queue.h 00:04:29.708 TEST_HEADER include/spdk/reduce.h 00:04:29.708 TEST_HEADER include/spdk/rpc.h 00:04:29.708 CC app/spdk_top/spdk_top.o 00:04:29.708 TEST_HEADER include/spdk/scheduler.h 00:04:29.708 TEST_HEADER include/spdk/scsi.h 00:04:29.708 TEST_HEADER include/spdk/scsi_spec.h 00:04:29.708 TEST_HEADER include/spdk/sock.h 00:04:29.708 TEST_HEADER include/spdk/stdinc.h 00:04:29.708 TEST_HEADER include/spdk/string.h 00:04:29.708 TEST_HEADER include/spdk/thread.h 00:04:29.708 TEST_HEADER 
include/spdk/trace.h 00:04:29.708 TEST_HEADER include/spdk/trace_parser.h 00:04:29.708 TEST_HEADER include/spdk/tree.h 00:04:29.708 TEST_HEADER include/spdk/ublk.h 00:04:29.708 TEST_HEADER include/spdk/util.h 00:04:29.708 TEST_HEADER include/spdk/uuid.h 00:04:29.708 TEST_HEADER include/spdk/version.h 00:04:29.708 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:29.708 CC app/vhost/vhost.o 00:04:29.708 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:29.708 TEST_HEADER include/spdk/vhost.h 00:04:29.708 TEST_HEADER include/spdk/vmd.h 00:04:29.708 TEST_HEADER include/spdk/xor.h 00:04:29.708 TEST_HEADER include/spdk/zipf.h 00:04:29.708 CXX test/cpp_headers/accel.o 00:04:29.708 LINK spdk_nvme_discover 00:04:29.708 LINK nvme_fuzz 00:04:29.968 CC app/spdk_dd/spdk_dd.o 00:04:29.968 LINK vhost 00:04:29.968 CC app/fio/nvme/fio_plugin.o 00:04:29.968 CXX test/cpp_headers/accel_module.o 00:04:29.968 CXX test/cpp_headers/assert.o 00:04:29.968 CC examples/vmd/lsvmd/lsvmd.o 00:04:30.228 CXX test/cpp_headers/barrier.o 00:04:30.228 LINK lsvmd 00:04:30.228 CXX test/cpp_headers/base64.o 00:04:30.228 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:30.228 LINK spdk_nvme_perf 00:04:30.228 LINK spdk_dd 00:04:30.228 CXX test/cpp_headers/bdev.o 00:04:30.228 CC examples/idxd/perf/perf.o 00:04:30.486 CC examples/vmd/led/led.o 00:04:30.486 CXX test/cpp_headers/bdev_module.o 00:04:30.486 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:30.486 CC test/app/stub/stub.o 00:04:30.486 LINK led 00:04:30.487 LINK spdk_nvme_identify 00:04:30.487 LINK spdk_nvme 00:04:30.487 CC test/env/mem_callbacks/mem_callbacks.o 00:04:30.487 LINK idxd_perf 00:04:30.487 CXX test/cpp_headers/bdev_zone.o 00:04:30.487 LINK spdk_top 00:04:30.746 LINK stub 00:04:30.746 LINK hello_fsdev 00:04:30.746 CXX test/cpp_headers/bit_array.o 00:04:30.746 CXX test/cpp_headers/bit_pool.o 00:04:30.746 CXX test/cpp_headers/blob_bdev.o 00:04:30.746 CC app/fio/bdev/fio_plugin.o 00:04:30.746 CXX test/cpp_headers/blobfs_bdev.o 00:04:30.746 CXX 
test/cpp_headers/blobfs.o 00:04:30.746 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:30.746 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:30.746 CXX test/cpp_headers/blob.o 00:04:31.005 CXX test/cpp_headers/conf.o 00:04:31.005 CC test/env/vtophys/vtophys.o 00:04:31.005 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:31.005 CC examples/accel/perf/accel_perf.o 00:04:31.005 CC examples/nvme/hello_world/hello_world.o 00:04:31.005 LINK mem_callbacks 00:04:31.005 CC examples/blob/hello_world/hello_blob.o 00:04:31.265 LINK vtophys 00:04:31.265 LINK env_dpdk_post_init 00:04:31.265 CXX test/cpp_headers/config.o 00:04:31.265 CXX test/cpp_headers/cpuset.o 00:04:31.265 LINK hello_world 00:04:31.265 LINK vhost_fuzz 00:04:31.265 LINK spdk_bdev 00:04:31.265 CC examples/blob/cli/blobcli.o 00:04:31.265 LINK hello_blob 00:04:31.265 CXX test/cpp_headers/crc16.o 00:04:31.265 CC test/env/memory/memory_ut.o 00:04:31.523 CC test/env/pci/pci_ut.o 00:04:31.523 CXX test/cpp_headers/crc32.o 00:04:31.523 CC examples/nvme/reconnect/reconnect.o 00:04:31.523 CXX test/cpp_headers/crc64.o 00:04:31.523 CC test/event/event_perf/event_perf.o 00:04:31.523 LINK accel_perf 00:04:31.523 CC test/nvme/aer/aer.o 00:04:31.782 CC test/nvme/reset/reset.o 00:04:31.782 LINK event_perf 00:04:31.782 CXX test/cpp_headers/dif.o 00:04:31.782 CXX test/cpp_headers/dma.o 00:04:31.782 LINK blobcli 00:04:31.782 LINK pci_ut 00:04:31.782 CC test/event/reactor/reactor.o 00:04:31.782 LINK reconnect 00:04:32.042 LINK aer 00:04:32.042 CXX test/cpp_headers/endian.o 00:04:32.042 CXX test/cpp_headers/env_dpdk.o 00:04:32.042 LINK reset 00:04:32.042 LINK iscsi_fuzz 00:04:32.042 LINK reactor 00:04:32.042 CXX test/cpp_headers/env.o 00:04:32.042 CC examples/bdev/hello_world/hello_bdev.o 00:04:32.042 CXX test/cpp_headers/event.o 00:04:32.042 CC test/event/reactor_perf/reactor_perf.o 00:04:32.303 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:32.303 CC test/nvme/sgl/sgl.o 00:04:32.303 CC test/nvme/e2edp/nvme_dp.o 
00:04:32.303 LINK reactor_perf 00:04:32.303 CC test/event/app_repeat/app_repeat.o 00:04:32.303 CXX test/cpp_headers/fd_group.o 00:04:32.303 LINK hello_bdev 00:04:32.303 CC test/nvme/overhead/overhead.o 00:04:32.303 CC test/event/scheduler/scheduler.o 00:04:32.563 LINK memory_ut 00:04:32.563 LINK app_repeat 00:04:32.563 CXX test/cpp_headers/fd.o 00:04:32.563 LINK sgl 00:04:32.563 LINK nvme_dp 00:04:32.563 CC test/nvme/err_injection/err_injection.o 00:04:32.563 LINK scheduler 00:04:32.563 CC examples/bdev/bdevperf/bdevperf.o 00:04:32.563 CXX test/cpp_headers/file.o 00:04:32.563 CXX test/cpp_headers/fsdev.o 00:04:32.563 CXX test/cpp_headers/fsdev_module.o 00:04:32.563 LINK overhead 00:04:32.563 LINK nvme_manage 00:04:32.823 LINK err_injection 00:04:32.823 CXX test/cpp_headers/ftl.o 00:04:32.823 CC test/nvme/startup/startup.o 00:04:32.823 CC test/nvme/reserve/reserve.o 00:04:32.823 CC test/nvme/simple_copy/simple_copy.o 00:04:32.823 CC test/nvme/connect_stress/connect_stress.o 00:04:32.823 CC test/nvme/boot_partition/boot_partition.o 00:04:32.823 CC test/nvme/compliance/nvme_compliance.o 00:04:33.084 LINK startup 00:04:33.084 CC examples/nvme/arbitration/arbitration.o 00:04:33.084 CXX test/cpp_headers/fuse_dispatcher.o 00:04:33.084 CC test/nvme/fused_ordering/fused_ordering.o 00:04:33.084 LINK reserve 00:04:33.084 LINK boot_partition 00:04:33.084 LINK connect_stress 00:04:33.084 LINK simple_copy 00:04:33.084 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:33.084 CXX test/cpp_headers/gpt_spec.o 00:04:33.382 LINK fused_ordering 00:04:33.382 CC test/nvme/fdp/fdp.o 00:04:33.382 LINK nvme_compliance 00:04:33.382 CC test/nvme/cuse/cuse.o 00:04:33.382 LINK arbitration 00:04:33.382 CC examples/nvme/hotplug/hotplug.o 00:04:33.382 CXX test/cpp_headers/hexlify.o 00:04:33.382 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:33.382 LINK doorbell_aers 00:04:33.382 LINK bdevperf 00:04:33.382 CXX test/cpp_headers/histogram_data.o 00:04:33.644 CC test/rpc_client/rpc_client_test.o 
00:04:33.644 LINK cmb_copy 00:04:33.644 LINK hotplug 00:04:33.644 CC examples/nvme/abort/abort.o 00:04:33.644 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:33.644 CXX test/cpp_headers/idxd.o 00:04:33.644 CC test/accel/dif/dif.o 00:04:33.644 LINK fdp 00:04:33.644 CXX test/cpp_headers/idxd_spec.o 00:04:33.644 LINK rpc_client_test 00:04:33.644 LINK pmr_persistence 00:04:33.904 CXX test/cpp_headers/init.o 00:04:33.904 CXX test/cpp_headers/ioat.o 00:04:33.904 CXX test/cpp_headers/ioat_spec.o 00:04:33.904 CXX test/cpp_headers/iscsi_spec.o 00:04:33.904 CC test/blobfs/mkfs/mkfs.o 00:04:33.904 CXX test/cpp_headers/json.o 00:04:33.904 CXX test/cpp_headers/jsonrpc.o 00:04:33.904 CC test/lvol/esnap/esnap.o 00:04:33.904 LINK abort 00:04:33.904 CXX test/cpp_headers/keyring.o 00:04:33.904 CXX test/cpp_headers/keyring_module.o 00:04:33.904 CXX test/cpp_headers/likely.o 00:04:33.904 LINK mkfs 00:04:34.164 CXX test/cpp_headers/log.o 00:04:34.164 CXX test/cpp_headers/lvol.o 00:04:34.164 CXX test/cpp_headers/md5.o 00:04:34.164 CXX test/cpp_headers/memory.o 00:04:34.164 CXX test/cpp_headers/mmio.o 00:04:34.164 CXX test/cpp_headers/nbd.o 00:04:34.164 CXX test/cpp_headers/net.o 00:04:34.164 CXX test/cpp_headers/notify.o 00:04:34.164 CXX test/cpp_headers/nvme.o 00:04:34.164 CXX test/cpp_headers/nvme_intel.o 00:04:34.460 CXX test/cpp_headers/nvme_ocssd.o 00:04:34.460 CC examples/nvmf/nvmf/nvmf.o 00:04:34.460 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:34.460 CXX test/cpp_headers/nvme_spec.o 00:04:34.460 LINK dif 00:04:34.460 CXX test/cpp_headers/nvme_zns.o 00:04:34.460 CXX test/cpp_headers/nvmf_cmd.o 00:04:34.460 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:34.460 CXX test/cpp_headers/nvmf.o 00:04:34.460 CXX test/cpp_headers/nvmf_spec.o 00:04:34.460 CXX test/cpp_headers/nvmf_transport.o 00:04:34.460 CXX test/cpp_headers/opal.o 00:04:34.460 LINK cuse 00:04:34.460 CXX test/cpp_headers/opal_spec.o 00:04:34.460 CXX test/cpp_headers/pci_ids.o 00:04:34.724 LINK nvmf 00:04:34.724 CXX 
test/cpp_headers/pipe.o 00:04:34.724 CXX test/cpp_headers/queue.o 00:04:34.724 CXX test/cpp_headers/reduce.o 00:04:34.724 CXX test/cpp_headers/rpc.o 00:04:34.724 CXX test/cpp_headers/scheduler.o 00:04:34.724 CXX test/cpp_headers/scsi.o 00:04:34.724 CXX test/cpp_headers/scsi_spec.o 00:04:34.724 CXX test/cpp_headers/sock.o 00:04:34.724 CC test/bdev/bdevio/bdevio.o 00:04:34.724 CXX test/cpp_headers/stdinc.o 00:04:34.724 CXX test/cpp_headers/string.o 00:04:34.724 CXX test/cpp_headers/thread.o 00:04:34.984 CXX test/cpp_headers/trace.o 00:04:34.984 CXX test/cpp_headers/trace_parser.o 00:04:34.984 CXX test/cpp_headers/tree.o 00:04:34.984 CXX test/cpp_headers/ublk.o 00:04:34.984 CXX test/cpp_headers/util.o 00:04:34.984 CXX test/cpp_headers/uuid.o 00:04:34.984 CXX test/cpp_headers/version.o 00:04:34.984 CXX test/cpp_headers/vfio_user_pci.o 00:04:34.984 CXX test/cpp_headers/vfio_user_spec.o 00:04:34.984 CXX test/cpp_headers/vhost.o 00:04:34.984 CXX test/cpp_headers/vmd.o 00:04:34.984 CXX test/cpp_headers/xor.o 00:04:34.984 CXX test/cpp_headers/zipf.o 00:04:35.244 LINK bdevio 00:04:39.445 LINK esnap 00:04:39.705 00:04:39.705 real 1m15.857s 00:04:39.705 user 5m37.582s 00:04:39.705 sys 1m8.528s 00:04:39.705 12:47:57 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:39.705 12:47:57 make -- common/autotest_common.sh@10 -- $ set +x 00:04:39.705 ************************************ 00:04:39.705 END TEST make 00:04:39.705 ************************************ 00:04:39.965 12:47:57 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:39.965 12:47:57 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:39.965 12:47:57 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:39.965 12:47:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.965 12:47:57 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:39.965 12:47:57 -- pm/common@44 -- $ pid=6201 00:04:39.965 12:47:57 -- pm/common@50 -- 
$ kill -TERM 6201 00:04:39.965 12:47:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.965 12:47:57 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:39.965 12:47:57 -- pm/common@44 -- $ pid=6203 00:04:39.965 12:47:57 -- pm/common@50 -- $ kill -TERM 6203 00:04:39.965 12:47:57 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:39.965 12:47:57 -- common/autotest_common.sh@1681 -- # lcov --version 00:04:39.965 12:47:57 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:39.965 12:47:57 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:39.965 12:47:57 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.965 12:47:57 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.965 12:47:57 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.965 12:47:57 -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.965 12:47:57 -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.965 12:47:57 -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.965 12:47:57 -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.965 12:47:57 -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.965 12:47:57 -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.965 12:47:57 -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.965 12:47:57 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.966 12:47:57 -- scripts/common.sh@344 -- # case "$op" in 00:04:39.966 12:47:57 -- scripts/common.sh@345 -- # : 1 00:04:39.966 12:47:57 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.966 12:47:57 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.966 12:47:57 -- scripts/common.sh@365 -- # decimal 1 00:04:39.966 12:47:57 -- scripts/common.sh@353 -- # local d=1 00:04:39.966 12:47:57 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.966 12:47:57 -- scripts/common.sh@355 -- # echo 1 00:04:39.966 12:47:57 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.966 12:47:57 -- scripts/common.sh@366 -- # decimal 2 00:04:39.966 12:47:57 -- scripts/common.sh@353 -- # local d=2 00:04:39.966 12:47:57 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.966 12:47:57 -- scripts/common.sh@355 -- # echo 2 00:04:39.966 12:47:57 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.966 12:47:57 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.966 12:47:57 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.966 12:47:57 -- scripts/common.sh@368 -- # return 0 00:04:39.966 12:47:57 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.966 12:47:57 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:39.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.966 --rc genhtml_branch_coverage=1 00:04:39.966 --rc genhtml_function_coverage=1 00:04:39.966 --rc genhtml_legend=1 00:04:39.966 --rc geninfo_all_blocks=1 00:04:39.966 --rc geninfo_unexecuted_blocks=1 00:04:39.966 00:04:39.966 ' 00:04:39.966 12:47:57 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:39.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.966 --rc genhtml_branch_coverage=1 00:04:39.966 --rc genhtml_function_coverage=1 00:04:39.966 --rc genhtml_legend=1 00:04:39.966 --rc geninfo_all_blocks=1 00:04:39.966 --rc geninfo_unexecuted_blocks=1 00:04:39.966 00:04:39.966 ' 00:04:39.966 12:47:57 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:39.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.966 --rc genhtml_branch_coverage=1 00:04:39.966 --rc 
genhtml_function_coverage=1 00:04:39.966 --rc genhtml_legend=1 00:04:39.966 --rc geninfo_all_blocks=1 00:04:39.966 --rc geninfo_unexecuted_blocks=1 00:04:39.966 00:04:39.966 ' 00:04:39.966 12:47:57 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:39.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.966 --rc genhtml_branch_coverage=1 00:04:39.966 --rc genhtml_function_coverage=1 00:04:39.966 --rc genhtml_legend=1 00:04:39.966 --rc geninfo_all_blocks=1 00:04:39.966 --rc geninfo_unexecuted_blocks=1 00:04:39.966 00:04:39.966 ' 00:04:39.966 12:47:57 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:39.966 12:47:57 -- nvmf/common.sh@7 -- # uname -s 00:04:40.226 12:47:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:40.226 12:47:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:40.226 12:47:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:40.226 12:47:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:40.226 12:47:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:40.226 12:47:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:40.226 12:47:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:40.227 12:47:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:40.227 12:47:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:40.227 12:47:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:40.227 12:47:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1ee4561f-c8c1-44d1-ac3c-57f4ce74092b 00:04:40.227 12:47:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=1ee4561f-c8c1-44d1-ac3c-57f4ce74092b 00:04:40.227 12:47:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:40.227 12:47:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:40.227 12:47:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:40.227 12:47:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:04:40.227 12:47:57 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:40.227 12:47:57 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:40.227 12:47:57 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:40.227 12:47:57 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:40.227 12:47:57 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:40.227 12:47:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.227 12:47:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.227 12:47:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.227 12:47:57 -- paths/export.sh@5 -- # export PATH 00:04:40.227 12:47:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.227 12:47:57 -- nvmf/common.sh@51 -- # : 0 00:04:40.227 12:47:57 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:40.227 12:47:57 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:40.227 12:47:57 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:04:40.227 12:47:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:40.227 12:47:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:40.227 12:47:57 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:40.227 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:40.227 12:47:57 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:40.227 12:47:57 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:40.227 12:47:57 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:40.227 12:47:57 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:40.227 12:47:57 -- spdk/autotest.sh@32 -- # uname -s 00:04:40.227 12:47:57 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:40.227 12:47:57 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:40.227 12:47:57 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:40.227 12:47:57 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:40.227 12:47:57 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:40.227 12:47:57 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:40.227 12:47:57 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:40.227 12:47:57 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:40.227 12:47:57 -- spdk/autotest.sh@48 -- # udevadm_pid=66860 00:04:40.227 12:47:57 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:40.227 12:47:57 -- pm/common@17 -- # local monitor 00:04:40.227 12:47:57 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:40.227 12:47:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:40.227 12:47:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:40.227 12:47:57 -- pm/common@25 -- # sleep 1 00:04:40.227 12:47:57 -- pm/common@21 -- # date +%s 00:04:40.227 12:47:57 -- 
pm/common@21 -- # date +%s 00:04:40.227 12:47:57 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732625277 00:04:40.227 12:47:57 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732625277 00:04:40.227 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732625277_collect-cpu-load.pm.log 00:04:40.227 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732625277_collect-vmstat.pm.log 00:04:41.168 12:47:58 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:41.168 12:47:58 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:41.168 12:47:58 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:41.168 12:47:58 -- common/autotest_common.sh@10 -- # set +x 00:04:41.168 12:47:58 -- spdk/autotest.sh@59 -- # create_test_list 00:04:41.168 12:47:58 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:41.168 12:47:58 -- common/autotest_common.sh@10 -- # set +x 00:04:41.168 12:47:58 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:41.168 12:47:58 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:41.168 12:47:58 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:41.168 12:47:58 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:41.168 12:47:58 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:41.168 12:47:58 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:41.427 12:47:58 -- common/autotest_common.sh@1455 -- # uname 00:04:41.427 12:47:58 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:41.427 12:47:58 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:41.427 12:47:58 -- common/autotest_common.sh@1475 -- 
# uname 00:04:41.427 12:47:58 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:41.427 12:47:58 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:41.427 12:47:58 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:41.427 lcov: LCOV version 1.15 00:04:41.427 12:47:58 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:56.321 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:56.321 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:11.215 12:48:26 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:11.215 12:48:26 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:11.215 12:48:26 -- common/autotest_common.sh@10 -- # set +x 00:05:11.215 12:48:26 -- spdk/autotest.sh@78 -- # rm -f 00:05:11.215 12:48:26 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:11.215 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:11.215 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:11.215 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:11.215 12:48:27 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:11.215 12:48:27 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:11.215 12:48:27 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:11.215 12:48:27 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:11.215 
12:48:27 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:11.215 12:48:27 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:11.215 12:48:27 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:11.215 12:48:27 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:11.215 12:48:27 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:11.215 12:48:27 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:11.215 12:48:27 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:11.215 12:48:27 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:11.215 12:48:27 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:11.215 12:48:27 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:11.215 12:48:27 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:11.215 12:48:27 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:11.215 12:48:27 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:11.215 12:48:27 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:11.215 12:48:27 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:11.215 12:48:27 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:11.215 12:48:27 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:05:11.215 12:48:27 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:11.215 12:48:27 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:11.215 12:48:27 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:11.215 12:48:27 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:11.215 12:48:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:11.215 12:48:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:11.215 12:48:27 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:05:11.215 12:48:27 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:11.215 12:48:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:11.215 No valid GPT data, bailing 00:05:11.215 12:48:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:11.215 12:48:27 -- scripts/common.sh@394 -- # pt= 00:05:11.215 12:48:27 -- scripts/common.sh@395 -- # return 1 00:05:11.215 12:48:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:11.215 1+0 records in 00:05:11.215 1+0 records out 00:05:11.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00684656 s, 153 MB/s 00:05:11.215 12:48:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:11.215 12:48:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:11.215 12:48:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:11.215 12:48:27 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:11.215 12:48:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:11.215 No valid GPT data, bailing 00:05:11.215 12:48:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:11.215 12:48:27 -- scripts/common.sh@394 -- # pt= 00:05:11.215 12:48:27 -- scripts/common.sh@395 -- # return 1 00:05:11.215 12:48:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:11.215 1+0 records in 00:05:11.215 1+0 records out 00:05:11.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0076473 s, 137 MB/s 00:05:11.215 12:48:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:11.215 12:48:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:11.215 12:48:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:11.215 12:48:27 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:11.215 12:48:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:05:11.215 No valid GPT data, bailing 00:05:11.215 12:48:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:11.215 12:48:27 -- scripts/common.sh@394 -- # pt= 00:05:11.215 12:48:27 -- scripts/common.sh@395 -- # return 1 00:05:11.215 12:48:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:11.215 1+0 records in 00:05:11.215 1+0 records out 00:05:11.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0069431 s, 151 MB/s 00:05:11.215 12:48:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:11.215 12:48:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:11.215 12:48:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:11.215 12:48:27 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:11.215 12:48:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:11.215 No valid GPT data, bailing 00:05:11.215 12:48:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:11.215 12:48:27 -- scripts/common.sh@394 -- # pt= 00:05:11.215 12:48:27 -- scripts/common.sh@395 -- # return 1 00:05:11.215 12:48:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:11.215 1+0 records in 00:05:11.215 1+0 records out 00:05:11.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00744742 s, 141 MB/s 00:05:11.215 12:48:27 -- spdk/autotest.sh@105 -- # sync 00:05:11.215 12:48:28 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:11.215 12:48:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:11.215 12:48:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:13.749 12:48:30 -- spdk/autotest.sh@111 -- # uname -s 00:05:13.749 12:48:30 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:13.749 12:48:30 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:13.749 12:48:30 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:05:14.008 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:14.008 Hugepages 00:05:14.008 node hugesize free / total 00:05:14.008 node0 1048576kB 0 / 0 00:05:14.008 node0 2048kB 0 / 0 00:05:14.008 00:05:14.008 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:14.267 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:14.267 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:14.526 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:14.526 12:48:32 -- spdk/autotest.sh@117 -- # uname -s 00:05:14.526 12:48:32 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:14.526 12:48:32 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:14.526 12:48:32 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:15.465 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:15.465 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:15.465 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:15.465 12:48:33 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:16.844 12:48:34 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:16.844 12:48:34 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:16.844 12:48:34 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:16.844 12:48:34 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:16.844 12:48:34 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:16.844 12:48:34 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:16.844 12:48:34 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:16.844 12:48:34 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:16.844 12:48:34 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:16.845 12:48:34 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:16.845 12:48:34 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:16.845 12:48:34 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:17.104 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:17.104 Waiting for block devices as requested 00:05:17.104 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:17.364 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:17.364 12:48:34 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:17.364 12:48:34 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:17.364 12:48:34 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:17.364 12:48:34 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:17.364 12:48:34 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:17.364 12:48:34 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:17.364 12:48:34 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:17.364 12:48:34 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:17.364 12:48:34 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:17.364 12:48:34 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:17.364 12:48:34 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:17.364 12:48:34 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:17.364 12:48:34 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:17.364 12:48:34 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:17.364 12:48:34 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:17.364 12:48:34 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:05:17.364 12:48:34 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:17.364 12:48:34 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:17.364 12:48:34 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:17.364 12:48:34 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:17.364 12:48:34 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:17.364 12:48:34 -- common/autotest_common.sh@1541 -- # continue 00:05:17.364 12:48:34 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:17.364 12:48:34 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:17.364 12:48:34 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:17.364 12:48:34 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:17.364 12:48:34 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:17.364 12:48:34 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:17.364 12:48:34 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:17.364 12:48:34 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:17.364 12:48:34 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:17.364 12:48:34 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:17.364 12:48:34 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:17.364 12:48:34 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:17.364 12:48:34 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:17.364 12:48:35 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:17.364 12:48:35 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:17.364 12:48:35 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:17.364 12:48:35 -- common/autotest_common.sh@1538 -- # nvme id-ctrl 
/dev/nvme0 00:05:17.364 12:48:35 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:17.364 12:48:35 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:17.364 12:48:35 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:17.364 12:48:35 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:17.364 12:48:35 -- common/autotest_common.sh@1541 -- # continue 00:05:17.364 12:48:35 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:17.364 12:48:35 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:17.364 12:48:35 -- common/autotest_common.sh@10 -- # set +x 00:05:17.623 12:48:35 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:17.623 12:48:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:17.623 12:48:35 -- common/autotest_common.sh@10 -- # set +x 00:05:17.623 12:48:35 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:18.561 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:18.561 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:18.561 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:18.561 12:48:36 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:18.561 12:48:36 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:18.561 12:48:36 -- common/autotest_common.sh@10 -- # set +x 00:05:18.561 12:48:36 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:18.561 12:48:36 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:18.561 12:48:36 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:18.561 12:48:36 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:18.561 12:48:36 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:18.561 12:48:36 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:18.561 12:48:36 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:18.561 12:48:36 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:18.561 
12:48:36 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:18.561 12:48:36 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:18.561 12:48:36 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:18.561 12:48:36 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:18.561 12:48:36 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:18.819 12:48:36 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:18.819 12:48:36 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:18.819 12:48:36 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:18.819 12:48:36 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:18.819 12:48:36 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:18.819 12:48:36 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:18.819 12:48:36 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:18.819 12:48:36 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:18.819 12:48:36 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:18.819 12:48:36 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:18.819 12:48:36 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:18.819 12:48:36 -- common/autotest_common.sh@1570 -- # return 0 00:05:18.819 12:48:36 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:18.819 12:48:36 -- common/autotest_common.sh@1578 -- # return 0 00:05:18.819 12:48:36 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:18.819 12:48:36 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:18.819 12:48:36 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:18.819 12:48:36 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:18.819 12:48:36 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:18.819 12:48:36 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:05:18.819 12:48:36 -- common/autotest_common.sh@10 -- # set +x 00:05:18.819 12:48:36 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:18.819 12:48:36 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:18.819 12:48:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.819 12:48:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.819 12:48:36 -- common/autotest_common.sh@10 -- # set +x 00:05:18.819 ************************************ 00:05:18.819 START TEST env 00:05:18.819 ************************************ 00:05:18.819 12:48:36 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:18.819 * Looking for test storage... 00:05:18.819 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:18.819 12:48:36 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:18.819 12:48:36 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:18.820 12:48:36 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:19.079 12:48:36 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:19.079 12:48:36 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.079 12:48:36 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.079 12:48:36 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.079 12:48:36 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.079 12:48:36 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.079 12:48:36 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.079 12:48:36 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.079 12:48:36 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.079 12:48:36 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.079 12:48:36 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.079 12:48:36 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.079 12:48:36 env -- 
scripts/common.sh@344 -- # case "$op" in 00:05:19.079 12:48:36 env -- scripts/common.sh@345 -- # : 1 00:05:19.079 12:48:36 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.079 12:48:36 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.079 12:48:36 env -- scripts/common.sh@365 -- # decimal 1 00:05:19.079 12:48:36 env -- scripts/common.sh@353 -- # local d=1 00:05:19.079 12:48:36 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.079 12:48:36 env -- scripts/common.sh@355 -- # echo 1 00:05:19.079 12:48:36 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.079 12:48:36 env -- scripts/common.sh@366 -- # decimal 2 00:05:19.079 12:48:36 env -- scripts/common.sh@353 -- # local d=2 00:05:19.079 12:48:36 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.079 12:48:36 env -- scripts/common.sh@355 -- # echo 2 00:05:19.079 12:48:36 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.079 12:48:36 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.079 12:48:36 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.079 12:48:36 env -- scripts/common.sh@368 -- # return 0 00:05:19.079 12:48:36 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.079 12:48:36 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:19.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.079 --rc genhtml_branch_coverage=1 00:05:19.079 --rc genhtml_function_coverage=1 00:05:19.079 --rc genhtml_legend=1 00:05:19.079 --rc geninfo_all_blocks=1 00:05:19.079 --rc geninfo_unexecuted_blocks=1 00:05:19.079 00:05:19.079 ' 00:05:19.079 12:48:36 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:19.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.079 --rc genhtml_branch_coverage=1 00:05:19.079 --rc genhtml_function_coverage=1 00:05:19.079 --rc genhtml_legend=1 00:05:19.079 --rc 
geninfo_all_blocks=1 00:05:19.079 --rc geninfo_unexecuted_blocks=1 00:05:19.079 00:05:19.079 ' 00:05:19.079 12:48:36 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:19.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.079 --rc genhtml_branch_coverage=1 00:05:19.079 --rc genhtml_function_coverage=1 00:05:19.079 --rc genhtml_legend=1 00:05:19.079 --rc geninfo_all_blocks=1 00:05:19.079 --rc geninfo_unexecuted_blocks=1 00:05:19.079 00:05:19.079 ' 00:05:19.079 12:48:36 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:19.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.079 --rc genhtml_branch_coverage=1 00:05:19.079 --rc genhtml_function_coverage=1 00:05:19.079 --rc genhtml_legend=1 00:05:19.079 --rc geninfo_all_blocks=1 00:05:19.079 --rc geninfo_unexecuted_blocks=1 00:05:19.079 00:05:19.079 ' 00:05:19.079 12:48:36 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:19.079 12:48:36 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.079 12:48:36 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.079 12:48:36 env -- common/autotest_common.sh@10 -- # set +x 00:05:19.079 ************************************ 00:05:19.079 START TEST env_memory 00:05:19.079 ************************************ 00:05:19.079 12:48:36 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:19.079 00:05:19.079 00:05:19.079 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.079 http://cunit.sourceforge.net/ 00:05:19.079 00:05:19.079 00:05:19.079 Suite: memory 00:05:19.079 Test: alloc and free memory map ...[2024-11-26 12:48:36.632811] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:19.079 passed 00:05:19.079 Test: mem map translation ...[2024-11-26 12:48:36.676067] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:19.079 [2024-11-26 12:48:36.676138] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:19.079 [2024-11-26 12:48:36.676210] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:19.079 [2024-11-26 12:48:36.676228] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:19.079 passed 00:05:19.079 Test: mem map registration ...[2024-11-26 12:48:36.741419] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:19.079 [2024-11-26 12:48:36.741480] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:19.338 passed 00:05:19.339 Test: mem map adjacent registrations ...passed 00:05:19.339 00:05:19.339 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.339 suites 1 1 n/a 0 0 00:05:19.339 tests 4 4 4 0 0 00:05:19.339 asserts 152 152 152 0 n/a 00:05:19.339 00:05:19.339 Elapsed time = 0.235 seconds 00:05:19.339 00:05:19.339 real 0m0.290s 00:05:19.339 user 0m0.251s 00:05:19.339 sys 0m0.027s 00:05:19.339 12:48:36 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.339 12:48:36 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:19.339 ************************************ 00:05:19.339 END TEST env_memory 00:05:19.339 ************************************ 00:05:19.339 12:48:36 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:19.339 
12:48:36 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.339 12:48:36 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.339 12:48:36 env -- common/autotest_common.sh@10 -- # set +x 00:05:19.339 ************************************ 00:05:19.339 START TEST env_vtophys 00:05:19.339 ************************************ 00:05:19.339 12:48:36 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:19.339 EAL: lib.eal log level changed from notice to debug 00:05:19.339 EAL: Detected lcore 0 as core 0 on socket 0 00:05:19.339 EAL: Detected lcore 1 as core 0 on socket 0 00:05:19.339 EAL: Detected lcore 2 as core 0 on socket 0 00:05:19.339 EAL: Detected lcore 3 as core 0 on socket 0 00:05:19.339 EAL: Detected lcore 4 as core 0 on socket 0 00:05:19.339 EAL: Detected lcore 5 as core 0 on socket 0 00:05:19.339 EAL: Detected lcore 6 as core 0 on socket 0 00:05:19.339 EAL: Detected lcore 7 as core 0 on socket 0 00:05:19.339 EAL: Detected lcore 8 as core 0 on socket 0 00:05:19.339 EAL: Detected lcore 9 as core 0 on socket 0 00:05:19.339 EAL: Maximum logical cores by configuration: 128 00:05:19.339 EAL: Detected CPU lcores: 10 00:05:19.339 EAL: Detected NUMA nodes: 1 00:05:19.339 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:19.339 EAL: Detected shared linkage of DPDK 00:05:19.339 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:19.339 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:19.339 EAL: Registered [vdev] bus. 
00:05:19.339 EAL: bus.vdev log level changed from disabled to notice 00:05:19.339 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:19.339 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:19.339 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:19.339 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:19.339 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:19.339 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:19.339 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:19.339 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:19.339 EAL: No shared files mode enabled, IPC will be disabled 00:05:19.339 EAL: No shared files mode enabled, IPC is disabled 00:05:19.339 EAL: Selected IOVA mode 'PA' 00:05:19.339 EAL: Probing VFIO support... 00:05:19.339 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:19.339 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:19.339 EAL: Ask a virtual area of 0x2e000 bytes 00:05:19.339 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:19.339 EAL: Setting up physically contiguous memory... 
00:05:19.339 EAL: Setting maximum number of open files to 524288 00:05:19.339 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:19.339 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:19.339 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.339 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:19.339 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.339 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.339 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:19.339 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:19.339 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.339 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:19.339 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.339 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.339 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:19.339 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:19.339 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.339 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:19.339 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.339 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.339 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:19.339 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:19.339 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.339 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:19.339 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.339 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.339 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:19.339 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:19.339 EAL: Hugepages will be freed exactly as allocated. 
00:05:19.339 EAL: No shared files mode enabled, IPC is disabled 00:05:19.339 EAL: No shared files mode enabled, IPC is disabled 00:05:19.597 EAL: TSC frequency is ~2290000 KHz 00:05:19.597 EAL: Main lcore 0 is ready (tid=7fae02a67a40;cpuset=[0]) 00:05:19.597 EAL: Trying to obtain current memory policy. 00:05:19.597 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.597 EAL: Restoring previous memory policy: 0 00:05:19.597 EAL: request: mp_malloc_sync 00:05:19.597 EAL: No shared files mode enabled, IPC is disabled 00:05:19.597 EAL: Heap on socket 0 was expanded by 2MB 00:05:19.597 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:19.597 EAL: No shared files mode enabled, IPC is disabled 00:05:19.597 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:19.597 EAL: Mem event callback 'spdk:(nil)' registered 00:05:19.597 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:19.597 00:05:19.597 00:05:19.597 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.597 http://cunit.sourceforge.net/ 00:05:19.597 00:05:19.597 00:05:19.597 Suite: components_suite 00:05:20.165 Test: vtophys_malloc_test ...passed 00:05:20.165 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:20.165 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.165 EAL: Restoring previous memory policy: 4 00:05:20.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.165 EAL: request: mp_malloc_sync 00:05:20.165 EAL: No shared files mode enabled, IPC is disabled 00:05:20.165 EAL: Heap on socket 0 was expanded by 4MB 00:05:20.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.165 EAL: request: mp_malloc_sync 00:05:20.165 EAL: No shared files mode enabled, IPC is disabled 00:05:20.165 EAL: Heap on socket 0 was shrunk by 4MB 00:05:20.165 EAL: Trying to obtain current memory policy. 
00:05:20.165 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.165 EAL: Restoring previous memory policy: 4 00:05:20.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.165 EAL: request: mp_malloc_sync 00:05:20.165 EAL: No shared files mode enabled, IPC is disabled 00:05:20.165 EAL: Heap on socket 0 was expanded by 6MB 00:05:20.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.165 EAL: request: mp_malloc_sync 00:05:20.165 EAL: No shared files mode enabled, IPC is disabled 00:05:20.165 EAL: Heap on socket 0 was shrunk by 6MB 00:05:20.165 EAL: Trying to obtain current memory policy. 00:05:20.165 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.165 EAL: Restoring previous memory policy: 4 00:05:20.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.165 EAL: request: mp_malloc_sync 00:05:20.165 EAL: No shared files mode enabled, IPC is disabled 00:05:20.165 EAL: Heap on socket 0 was expanded by 10MB 00:05:20.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.165 EAL: request: mp_malloc_sync 00:05:20.165 EAL: No shared files mode enabled, IPC is disabled 00:05:20.165 EAL: Heap on socket 0 was shrunk by 10MB 00:05:20.165 EAL: Trying to obtain current memory policy. 00:05:20.165 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.165 EAL: Restoring previous memory policy: 4 00:05:20.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.165 EAL: request: mp_malloc_sync 00:05:20.165 EAL: No shared files mode enabled, IPC is disabled 00:05:20.165 EAL: Heap on socket 0 was expanded by 18MB 00:05:20.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.165 EAL: request: mp_malloc_sync 00:05:20.165 EAL: No shared files mode enabled, IPC is disabled 00:05:20.165 EAL: Heap on socket 0 was shrunk by 18MB 00:05:20.165 EAL: Trying to obtain current memory policy. 
00:05:20.165 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.165 EAL: Restoring previous memory policy: 4 00:05:20.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.165 EAL: request: mp_malloc_sync 00:05:20.165 EAL: No shared files mode enabled, IPC is disabled 00:05:20.166 EAL: Heap on socket 0 was expanded by 34MB 00:05:20.166 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.166 EAL: request: mp_malloc_sync 00:05:20.166 EAL: No shared files mode enabled, IPC is disabled 00:05:20.166 EAL: Heap on socket 0 was shrunk by 34MB 00:05:20.166 EAL: Trying to obtain current memory policy. 00:05:20.166 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.166 EAL: Restoring previous memory policy: 4 00:05:20.166 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.166 EAL: request: mp_malloc_sync 00:05:20.166 EAL: No shared files mode enabled, IPC is disabled 00:05:20.166 EAL: Heap on socket 0 was expanded by 66MB 00:05:20.166 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.166 EAL: request: mp_malloc_sync 00:05:20.166 EAL: No shared files mode enabled, IPC is disabled 00:05:20.166 EAL: Heap on socket 0 was shrunk by 66MB 00:05:20.166 EAL: Trying to obtain current memory policy. 00:05:20.166 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.166 EAL: Restoring previous memory policy: 4 00:05:20.166 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.166 EAL: request: mp_malloc_sync 00:05:20.166 EAL: No shared files mode enabled, IPC is disabled 00:05:20.166 EAL: Heap on socket 0 was expanded by 130MB 00:05:20.166 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.425 EAL: request: mp_malloc_sync 00:05:20.425 EAL: No shared files mode enabled, IPC is disabled 00:05:20.425 EAL: Heap on socket 0 was shrunk by 130MB 00:05:20.425 EAL: Trying to obtain current memory policy. 
00:05:20.425 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.425 EAL: Restoring previous memory policy: 4 00:05:20.425 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.425 EAL: request: mp_malloc_sync 00:05:20.425 EAL: No shared files mode enabled, IPC is disabled 00:05:20.425 EAL: Heap on socket 0 was expanded by 258MB 00:05:20.425 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.691 EAL: request: mp_malloc_sync 00:05:20.692 EAL: No shared files mode enabled, IPC is disabled 00:05:20.692 EAL: Heap on socket 0 was shrunk by 258MB 00:05:20.692 EAL: Trying to obtain current memory policy. 00:05:20.692 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.692 EAL: Restoring previous memory policy: 4 00:05:20.692 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.692 EAL: request: mp_malloc_sync 00:05:20.692 EAL: No shared files mode enabled, IPC is disabled 00:05:20.692 EAL: Heap on socket 0 was expanded by 514MB 00:05:20.966 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.966 EAL: request: mp_malloc_sync 00:05:20.966 EAL: No shared files mode enabled, IPC is disabled 00:05:20.966 EAL: Heap on socket 0 was shrunk by 514MB 00:05:20.966 EAL: Trying to obtain current memory policy. 
00:05:20.966 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.535 EAL: Restoring previous memory policy: 4 00:05:21.535 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.535 EAL: request: mp_malloc_sync 00:05:21.535 EAL: No shared files mode enabled, IPC is disabled 00:05:21.535 EAL: Heap on socket 0 was expanded by 1026MB 00:05:21.794 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.062 passed 00:05:22.062 00:05:22.062 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.062 suites 1 1 n/a 0 0 00:05:22.062 tests 2 2 2 0 0 00:05:22.062 asserts 5407 5407 5407 0 n/a 00:05:22.062 00:05:22.062 Elapsed time = 2.453 seconds 00:05:22.062 EAL: request: mp_malloc_sync 00:05:22.062 EAL: No shared files mode enabled, IPC is disabled 00:05:22.062 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:22.062 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.062 EAL: request: mp_malloc_sync 00:05:22.062 EAL: No shared files mode enabled, IPC is disabled 00:05:22.062 EAL: Heap on socket 0 was shrunk by 2MB 00:05:22.062 EAL: No shared files mode enabled, IPC is disabled 00:05:22.062 EAL: No shared files mode enabled, IPC is disabled 00:05:22.062 EAL: No shared files mode enabled, IPC is disabled 00:05:22.062 00:05:22.062 real 0m2.729s 00:05:22.062 user 0m1.386s 00:05:22.062 sys 0m1.198s 00:05:22.062 12:48:39 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.062 12:48:39 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:22.062 ************************************ 00:05:22.062 END TEST env_vtophys 00:05:22.062 ************************************ 00:05:22.062 12:48:39 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:22.062 12:48:39 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.062 12:48:39 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.062 12:48:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:22.062 
************************************ 00:05:22.062 START TEST env_pci 00:05:22.062 ************************************ 00:05:22.062 12:48:39 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:22.321 00:05:22.321 00:05:22.321 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.321 http://cunit.sourceforge.net/ 00:05:22.321 00:05:22.321 00:05:22.321 Suite: pci 00:05:22.321 Test: pci_hook ...[2024-11-26 12:48:39.755735] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69112 has claimed it 00:05:22.321 passed 00:05:22.321 00:05:22.321 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.321 suites 1 1 n/a 0 0 00:05:22.321 tests 1 1 1 0 0 00:05:22.321 asserts 25 25 25 0 n/a 00:05:22.321 00:05:22.321 Elapsed time = 0.005 seconds 00:05:22.321 EAL: Cannot find device (10000:00:01.0) 00:05:22.321 EAL: Failed to attach device on primary process 00:05:22.321 00:05:22.321 real 0m0.094s 00:05:22.321 user 0m0.039s 00:05:22.321 sys 0m0.054s 00:05:22.321 12:48:39 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.321 12:48:39 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:22.321 ************************************ 00:05:22.321 END TEST env_pci 00:05:22.321 ************************************ 00:05:22.321 12:48:39 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:22.321 12:48:39 env -- env/env.sh@15 -- # uname 00:05:22.321 12:48:39 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:22.321 12:48:39 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:22.321 12:48:39 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:22.321 12:48:39 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:22.321 12:48:39 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.321 12:48:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:22.321 ************************************ 00:05:22.321 START TEST env_dpdk_post_init 00:05:22.321 ************************************ 00:05:22.321 12:48:39 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:22.321 EAL: Detected CPU lcores: 10 00:05:22.321 EAL: Detected NUMA nodes: 1 00:05:22.321 EAL: Detected shared linkage of DPDK 00:05:22.321 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:22.321 EAL: Selected IOVA mode 'PA' 00:05:22.580 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:22.580 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:22.580 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:22.580 Starting DPDK initialization... 00:05:22.580 Starting SPDK post initialization... 00:05:22.580 SPDK NVMe probe 00:05:22.580 Attaching to 0000:00:10.0 00:05:22.580 Attaching to 0000:00:11.0 00:05:22.580 Attached to 0000:00:10.0 00:05:22.580 Attached to 0000:00:11.0 00:05:22.580 Cleaning up... 
00:05:22.580 00:05:22.580 real 0m0.264s 00:05:22.580 user 0m0.076s 00:05:22.580 sys 0m0.091s 00:05:22.580 12:48:40 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.580 12:48:40 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:22.580 ************************************ 00:05:22.580 END TEST env_dpdk_post_init 00:05:22.580 ************************************ 00:05:22.580 12:48:40 env -- env/env.sh@26 -- # uname 00:05:22.580 12:48:40 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:22.580 12:48:40 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:22.580 12:48:40 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.580 12:48:40 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.580 12:48:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:22.580 ************************************ 00:05:22.580 START TEST env_mem_callbacks 00:05:22.580 ************************************ 00:05:22.580 12:48:40 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:22.839 EAL: Detected CPU lcores: 10 00:05:22.839 EAL: Detected NUMA nodes: 1 00:05:22.839 EAL: Detected shared linkage of DPDK 00:05:22.839 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:22.839 EAL: Selected IOVA mode 'PA' 00:05:22.839 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:22.839 00:05:22.839 00:05:22.839 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.839 http://cunit.sourceforge.net/ 00:05:22.839 00:05:22.839 00:05:22.839 Suite: memory 00:05:22.839 Test: test ... 
00:05:22.839 register 0x200000200000 2097152 00:05:22.839 malloc 3145728 00:05:22.839 register 0x200000400000 4194304 00:05:22.839 buf 0x200000500000 len 3145728 PASSED 00:05:22.839 malloc 64 00:05:22.839 buf 0x2000004fff40 len 64 PASSED 00:05:22.839 malloc 4194304 00:05:22.839 register 0x200000800000 6291456 00:05:22.839 buf 0x200000a00000 len 4194304 PASSED 00:05:22.839 free 0x200000500000 3145728 00:05:22.839 free 0x2000004fff40 64 00:05:22.839 unregister 0x200000400000 4194304 PASSED 00:05:22.839 free 0x200000a00000 4194304 00:05:22.839 unregister 0x200000800000 6291456 PASSED 00:05:22.839 malloc 8388608 00:05:22.839 register 0x200000400000 10485760 00:05:22.839 buf 0x200000600000 len 8388608 PASSED 00:05:22.839 free 0x200000600000 8388608 00:05:22.839 unregister 0x200000400000 10485760 PASSED 00:05:22.839 passed 00:05:22.839 00:05:22.839 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.839 suites 1 1 n/a 0 0 00:05:22.839 tests 1 1 1 0 0 00:05:22.839 asserts 15 15 15 0 n/a 00:05:22.839 00:05:22.839 Elapsed time = 0.016 seconds 00:05:22.839 00:05:22.839 real 0m0.209s 00:05:22.839 user 0m0.038s 00:05:22.839 sys 0m0.069s 00:05:22.839 12:48:40 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.839 12:48:40 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:22.839 ************************************ 00:05:22.839 END TEST env_mem_callbacks 00:05:22.839 ************************************ 00:05:22.839 00:05:22.839 real 0m4.187s 00:05:22.839 user 0m2.023s 00:05:22.839 sys 0m1.830s 00:05:22.839 12:48:40 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.839 12:48:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:22.839 ************************************ 00:05:22.839 END TEST env 00:05:22.839 ************************************ 00:05:23.098 12:48:40 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:23.098 12:48:40 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.098 12:48:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.098 12:48:40 -- common/autotest_common.sh@10 -- # set +x 00:05:23.098 ************************************ 00:05:23.098 START TEST rpc 00:05:23.098 ************************************ 00:05:23.098 12:48:40 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:23.098 * Looking for test storage... 00:05:23.098 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:23.098 12:48:40 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:23.098 12:48:40 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:23.098 12:48:40 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:23.358 12:48:40 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:23.358 12:48:40 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.358 12:48:40 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.358 12:48:40 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.358 12:48:40 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.358 12:48:40 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.358 12:48:40 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.358 12:48:40 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.358 12:48:40 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.358 12:48:40 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.358 12:48:40 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.358 12:48:40 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.358 12:48:40 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:23.358 12:48:40 rpc -- scripts/common.sh@345 -- # : 1 00:05:23.358 12:48:40 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.358 12:48:40 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.358 12:48:40 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:23.358 12:48:40 rpc -- scripts/common.sh@353 -- # local d=1 00:05:23.358 12:48:40 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.358 12:48:40 rpc -- scripts/common.sh@355 -- # echo 1 00:05:23.358 12:48:40 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.358 12:48:40 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:23.358 12:48:40 rpc -- scripts/common.sh@353 -- # local d=2 00:05:23.358 12:48:40 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.358 12:48:40 rpc -- scripts/common.sh@355 -- # echo 2 00:05:23.358 12:48:40 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.358 12:48:40 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.358 12:48:40 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.358 12:48:40 rpc -- scripts/common.sh@368 -- # return 0 00:05:23.358 12:48:40 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.358 12:48:40 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:23.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.358 --rc genhtml_branch_coverage=1 00:05:23.358 --rc genhtml_function_coverage=1 00:05:23.358 --rc genhtml_legend=1 00:05:23.358 --rc geninfo_all_blocks=1 00:05:23.358 --rc geninfo_unexecuted_blocks=1 00:05:23.358 00:05:23.358 ' 00:05:23.358 12:48:40 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:23.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.358 --rc genhtml_branch_coverage=1 00:05:23.358 --rc genhtml_function_coverage=1 00:05:23.358 --rc genhtml_legend=1 00:05:23.358 --rc geninfo_all_blocks=1 00:05:23.358 --rc geninfo_unexecuted_blocks=1 00:05:23.358 00:05:23.358 ' 00:05:23.358 12:48:40 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:23.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:23.358 --rc genhtml_branch_coverage=1 00:05:23.358 --rc genhtml_function_coverage=1 00:05:23.358 --rc genhtml_legend=1 00:05:23.358 --rc geninfo_all_blocks=1 00:05:23.358 --rc geninfo_unexecuted_blocks=1 00:05:23.358 00:05:23.358 ' 00:05:23.358 12:48:40 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:23.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.358 --rc genhtml_branch_coverage=1 00:05:23.358 --rc genhtml_function_coverage=1 00:05:23.358 --rc genhtml_legend=1 00:05:23.358 --rc geninfo_all_blocks=1 00:05:23.358 --rc geninfo_unexecuted_blocks=1 00:05:23.358 00:05:23.358 ' 00:05:23.358 12:48:40 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69239 00:05:23.358 12:48:40 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:23.358 12:48:40 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.358 12:48:40 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69239 00:05:23.358 12:48:40 rpc -- common/autotest_common.sh@831 -- # '[' -z 69239 ']' 00:05:23.358 12:48:40 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.358 12:48:40 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:23.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.358 12:48:40 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.358 12:48:40 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:23.358 12:48:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.358 [2024-11-26 12:48:40.909625] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:23.358 [2024-11-26 12:48:40.909750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69239 ] 00:05:23.618 [2024-11-26 12:48:41.074975] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.618 [2024-11-26 12:48:41.156671] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:23.618 [2024-11-26 12:48:41.156752] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69239' to capture a snapshot of events at runtime. 00:05:23.618 [2024-11-26 12:48:41.156766] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:23.618 [2024-11-26 12:48:41.156776] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:23.618 [2024-11-26 12:48:41.156799] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69239 for offline analysis/debug. 
00:05:23.618 [2024-11-26 12:48:41.156847] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.186 12:48:41 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:24.186 12:48:41 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:24.186 12:48:41 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:24.186 12:48:41 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:24.186 12:48:41 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:24.186 12:48:41 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:24.186 12:48:41 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:24.186 12:48:41 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.186 12:48:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.186 ************************************ 00:05:24.186 START TEST rpc_integrity 00:05:24.186 ************************************ 00:05:24.186 12:48:41 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:24.186 12:48:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:24.186 12:48:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.186 12:48:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.186 12:48:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.186 12:48:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:24.186 12:48:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:24.186 12:48:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:24.186 12:48:41 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:24.186 12:48:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.186 12:48:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.186 12:48:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.186 12:48:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:24.186 12:48:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:24.186 12:48:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.186 12:48:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.186 12:48:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.186 12:48:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:24.186 { 00:05:24.186 "name": "Malloc0", 00:05:24.186 "aliases": [ 00:05:24.186 "4a06dfb9-cfb4-4f7f-a839-591ceab38e31" 00:05:24.186 ], 00:05:24.186 "product_name": "Malloc disk", 00:05:24.186 "block_size": 512, 00:05:24.186 "num_blocks": 16384, 00:05:24.186 "uuid": "4a06dfb9-cfb4-4f7f-a839-591ceab38e31", 00:05:24.186 "assigned_rate_limits": { 00:05:24.186 "rw_ios_per_sec": 0, 00:05:24.186 "rw_mbytes_per_sec": 0, 00:05:24.186 "r_mbytes_per_sec": 0, 00:05:24.186 "w_mbytes_per_sec": 0 00:05:24.186 }, 00:05:24.186 "claimed": false, 00:05:24.186 "zoned": false, 00:05:24.186 "supported_io_types": { 00:05:24.186 "read": true, 00:05:24.186 "write": true, 00:05:24.186 "unmap": true, 00:05:24.186 "flush": true, 00:05:24.186 "reset": true, 00:05:24.186 "nvme_admin": false, 00:05:24.186 "nvme_io": false, 00:05:24.186 "nvme_io_md": false, 00:05:24.186 "write_zeroes": true, 00:05:24.186 "zcopy": true, 00:05:24.186 "get_zone_info": false, 00:05:24.186 "zone_management": false, 00:05:24.186 "zone_append": false, 00:05:24.186 "compare": false, 00:05:24.186 "compare_and_write": false, 00:05:24.186 "abort": true, 00:05:24.186 "seek_hole": false, 
00:05:24.186 "seek_data": false, 00:05:24.186 "copy": true, 00:05:24.186 "nvme_iov_md": false 00:05:24.186 }, 00:05:24.186 "memory_domains": [ 00:05:24.186 { 00:05:24.186 "dma_device_id": "system", 00:05:24.186 "dma_device_type": 1 00:05:24.186 }, 00:05:24.186 { 00:05:24.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.186 "dma_device_type": 2 00:05:24.186 } 00:05:24.186 ], 00:05:24.186 "driver_specific": {} 00:05:24.186 } 00:05:24.186 ]' 00:05:24.186 12:48:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:24.446 12:48:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:24.446 12:48:41 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:24.446 12:48:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.446 12:48:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.446 [2024-11-26 12:48:41.911342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:24.446 [2024-11-26 12:48:41.911484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:24.446 [2024-11-26 12:48:41.911551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:24.446 [2024-11-26 12:48:41.911571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:24.446 [2024-11-26 12:48:41.914795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:24.446 [2024-11-26 12:48:41.914857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:24.446 Passthru0 00:05:24.446 12:48:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.446 12:48:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:24.446 12:48:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.446 12:48:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:24.446 12:48:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.446 12:48:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:24.446 { 00:05:24.446 "name": "Malloc0", 00:05:24.446 "aliases": [ 00:05:24.446 "4a06dfb9-cfb4-4f7f-a839-591ceab38e31" 00:05:24.446 ], 00:05:24.446 "product_name": "Malloc disk", 00:05:24.446 "block_size": 512, 00:05:24.446 "num_blocks": 16384, 00:05:24.446 "uuid": "4a06dfb9-cfb4-4f7f-a839-591ceab38e31", 00:05:24.446 "assigned_rate_limits": { 00:05:24.446 "rw_ios_per_sec": 0, 00:05:24.446 "rw_mbytes_per_sec": 0, 00:05:24.446 "r_mbytes_per_sec": 0, 00:05:24.446 "w_mbytes_per_sec": 0 00:05:24.446 }, 00:05:24.446 "claimed": true, 00:05:24.446 "claim_type": "exclusive_write", 00:05:24.446 "zoned": false, 00:05:24.446 "supported_io_types": { 00:05:24.446 "read": true, 00:05:24.446 "write": true, 00:05:24.446 "unmap": true, 00:05:24.446 "flush": true, 00:05:24.446 "reset": true, 00:05:24.446 "nvme_admin": false, 00:05:24.446 "nvme_io": false, 00:05:24.446 "nvme_io_md": false, 00:05:24.446 "write_zeroes": true, 00:05:24.446 "zcopy": true, 00:05:24.446 "get_zone_info": false, 00:05:24.446 "zone_management": false, 00:05:24.446 "zone_append": false, 00:05:24.446 "compare": false, 00:05:24.446 "compare_and_write": false, 00:05:24.446 "abort": true, 00:05:24.446 "seek_hole": false, 00:05:24.446 "seek_data": false, 00:05:24.446 "copy": true, 00:05:24.446 "nvme_iov_md": false 00:05:24.446 }, 00:05:24.446 "memory_domains": [ 00:05:24.446 { 00:05:24.446 "dma_device_id": "system", 00:05:24.446 "dma_device_type": 1 00:05:24.446 }, 00:05:24.446 { 00:05:24.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.446 "dma_device_type": 2 00:05:24.446 } 00:05:24.446 ], 00:05:24.446 "driver_specific": {} 00:05:24.446 }, 00:05:24.446 { 00:05:24.446 "name": "Passthru0", 00:05:24.446 "aliases": [ 00:05:24.446 "5024bb8f-b753-5dc4-b267-42f680bc4db0" 00:05:24.446 ], 00:05:24.446 "product_name": "passthru", 00:05:24.446 
"block_size": 512, 00:05:24.446 "num_blocks": 16384, 00:05:24.446 "uuid": "5024bb8f-b753-5dc4-b267-42f680bc4db0", 00:05:24.446 "assigned_rate_limits": { 00:05:24.446 "rw_ios_per_sec": 0, 00:05:24.446 "rw_mbytes_per_sec": 0, 00:05:24.446 "r_mbytes_per_sec": 0, 00:05:24.446 "w_mbytes_per_sec": 0 00:05:24.446 }, 00:05:24.446 "claimed": false, 00:05:24.446 "zoned": false, 00:05:24.446 "supported_io_types": { 00:05:24.446 "read": true, 00:05:24.446 "write": true, 00:05:24.446 "unmap": true, 00:05:24.446 "flush": true, 00:05:24.446 "reset": true, 00:05:24.446 "nvme_admin": false, 00:05:24.446 "nvme_io": false, 00:05:24.446 "nvme_io_md": false, 00:05:24.446 "write_zeroes": true, 00:05:24.446 "zcopy": true, 00:05:24.446 "get_zone_info": false, 00:05:24.446 "zone_management": false, 00:05:24.446 "zone_append": false, 00:05:24.446 "compare": false, 00:05:24.446 "compare_and_write": false, 00:05:24.446 "abort": true, 00:05:24.446 "seek_hole": false, 00:05:24.446 "seek_data": false, 00:05:24.446 "copy": true, 00:05:24.446 "nvme_iov_md": false 00:05:24.446 }, 00:05:24.446 "memory_domains": [ 00:05:24.446 { 00:05:24.446 "dma_device_id": "system", 00:05:24.446 "dma_device_type": 1 00:05:24.446 }, 00:05:24.446 { 00:05:24.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.446 "dma_device_type": 2 00:05:24.446 } 00:05:24.446 ], 00:05:24.446 "driver_specific": { 00:05:24.446 "passthru": { 00:05:24.446 "name": "Passthru0", 00:05:24.446 "base_bdev_name": "Malloc0" 00:05:24.446 } 00:05:24.446 } 00:05:24.446 } 00:05:24.446 ]' 00:05:24.446 12:48:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:24.446 12:48:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:24.446 12:48:41 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:24.446 12:48:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.446 12:48:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.446 12:48:41 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.446 12:48:41 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:24.446 12:48:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.446 12:48:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.446 12:48:42 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.446 12:48:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:24.446 12:48:42 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.446 12:48:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.446 12:48:42 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.446 12:48:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:24.446 12:48:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:24.446 12:48:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:24.446 00:05:24.446 real 0m0.318s 00:05:24.446 user 0m0.181s 00:05:24.446 sys 0m0.058s 00:05:24.446 12:48:42 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:24.446 12:48:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.446 ************************************ 00:05:24.446 END TEST rpc_integrity 00:05:24.446 ************************************ 00:05:24.446 12:48:42 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:24.446 12:48:42 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:24.446 12:48:42 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.446 12:48:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.705 ************************************ 00:05:24.705 START TEST rpc_plugins 00:05:24.705 ************************************ 00:05:24.705 12:48:42 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:24.705 12:48:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:24.705 12:48:42 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.705 12:48:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:24.705 12:48:42 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.705 12:48:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:24.705 12:48:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:24.705 12:48:42 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.705 12:48:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:24.705 12:48:42 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.705 12:48:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:24.705 { 00:05:24.705 "name": "Malloc1", 00:05:24.705 "aliases": [ 00:05:24.705 "efc31541-2723-4e05-a67c-7aa7627efdea" 00:05:24.705 ], 00:05:24.705 "product_name": "Malloc disk", 00:05:24.705 "block_size": 4096, 00:05:24.705 "num_blocks": 256, 00:05:24.705 "uuid": "efc31541-2723-4e05-a67c-7aa7627efdea", 00:05:24.705 "assigned_rate_limits": { 00:05:24.705 "rw_ios_per_sec": 0, 00:05:24.705 "rw_mbytes_per_sec": 0, 00:05:24.705 "r_mbytes_per_sec": 0, 00:05:24.705 "w_mbytes_per_sec": 0 00:05:24.705 }, 00:05:24.705 "claimed": false, 00:05:24.705 "zoned": false, 00:05:24.705 "supported_io_types": { 00:05:24.705 "read": true, 00:05:24.705 "write": true, 00:05:24.705 "unmap": true, 00:05:24.705 "flush": true, 00:05:24.705 "reset": true, 00:05:24.705 "nvme_admin": false, 00:05:24.705 "nvme_io": false, 00:05:24.705 "nvme_io_md": false, 00:05:24.705 "write_zeroes": true, 00:05:24.705 "zcopy": true, 00:05:24.705 "get_zone_info": false, 00:05:24.705 "zone_management": false, 00:05:24.705 "zone_append": false, 00:05:24.705 "compare": false, 00:05:24.705 "compare_and_write": false, 00:05:24.705 "abort": true, 00:05:24.705 "seek_hole": false, 00:05:24.705 "seek_data": false, 00:05:24.705 "copy": 
true, 00:05:24.705 "nvme_iov_md": false 00:05:24.705 }, 00:05:24.705 "memory_domains": [ 00:05:24.705 { 00:05:24.705 "dma_device_id": "system", 00:05:24.705 "dma_device_type": 1 00:05:24.705 }, 00:05:24.705 { 00:05:24.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.705 "dma_device_type": 2 00:05:24.705 } 00:05:24.705 ], 00:05:24.705 "driver_specific": {} 00:05:24.705 } 00:05:24.705 ]' 00:05:24.705 12:48:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:24.705 12:48:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:24.705 12:48:42 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:24.705 12:48:42 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.705 12:48:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:24.705 12:48:42 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.706 12:48:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:24.706 12:48:42 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.706 12:48:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:24.706 12:48:42 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.706 12:48:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:24.706 12:48:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:24.706 12:48:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:24.706 00:05:24.706 real 0m0.166s 00:05:24.706 user 0m0.098s 00:05:24.706 sys 0m0.030s 00:05:24.706 12:48:42 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:24.706 12:48:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:24.706 ************************************ 00:05:24.706 END TEST rpc_plugins 00:05:24.706 ************************************ 00:05:24.706 12:48:42 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:24.706 12:48:42 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:24.706 12:48:42 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.706 12:48:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.706 ************************************ 00:05:24.706 START TEST rpc_trace_cmd_test 00:05:24.706 ************************************ 00:05:24.706 12:48:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:24.706 12:48:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:24.706 12:48:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:24.706 12:48:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.706 12:48:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:24.965 12:48:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.965 12:48:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:24.965 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69239", 00:05:24.965 "tpoint_group_mask": "0x8", 00:05:24.965 "iscsi_conn": { 00:05:24.965 "mask": "0x2", 00:05:24.965 "tpoint_mask": "0x0" 00:05:24.965 }, 00:05:24.965 "scsi": { 00:05:24.965 "mask": "0x4", 00:05:24.965 "tpoint_mask": "0x0" 00:05:24.965 }, 00:05:24.965 "bdev": { 00:05:24.965 "mask": "0x8", 00:05:24.965 "tpoint_mask": "0xffffffffffffffff" 00:05:24.965 }, 00:05:24.965 "nvmf_rdma": { 00:05:24.965 "mask": "0x10", 00:05:24.965 "tpoint_mask": "0x0" 00:05:24.965 }, 00:05:24.965 "nvmf_tcp": { 00:05:24.965 "mask": "0x20", 00:05:24.965 "tpoint_mask": "0x0" 00:05:24.965 }, 00:05:24.965 "ftl": { 00:05:24.965 "mask": "0x40", 00:05:24.965 "tpoint_mask": "0x0" 00:05:24.965 }, 00:05:24.965 "blobfs": { 00:05:24.965 "mask": "0x80", 00:05:24.965 "tpoint_mask": "0x0" 00:05:24.965 }, 00:05:24.965 "dsa": { 00:05:24.965 "mask": "0x200", 00:05:24.965 "tpoint_mask": "0x0" 00:05:24.965 }, 00:05:24.965 "thread": { 00:05:24.965 "mask": "0x400", 00:05:24.965 
"tpoint_mask": "0x0" 00:05:24.965 }, 00:05:24.965 "nvme_pcie": { 00:05:24.965 "mask": "0x800", 00:05:24.965 "tpoint_mask": "0x0" 00:05:24.965 }, 00:05:24.965 "iaa": { 00:05:24.965 "mask": "0x1000", 00:05:24.965 "tpoint_mask": "0x0" 00:05:24.965 }, 00:05:24.965 "nvme_tcp": { 00:05:24.965 "mask": "0x2000", 00:05:24.965 "tpoint_mask": "0x0" 00:05:24.965 }, 00:05:24.965 "bdev_nvme": { 00:05:24.965 "mask": "0x4000", 00:05:24.965 "tpoint_mask": "0x0" 00:05:24.965 }, 00:05:24.965 "sock": { 00:05:24.965 "mask": "0x8000", 00:05:24.965 "tpoint_mask": "0x0" 00:05:24.965 }, 00:05:24.965 "blob": { 00:05:24.965 "mask": "0x10000", 00:05:24.965 "tpoint_mask": "0x0" 00:05:24.965 }, 00:05:24.965 "bdev_raid": { 00:05:24.965 "mask": "0x20000", 00:05:24.965 "tpoint_mask": "0x0" 00:05:24.965 } 00:05:24.965 }' 00:05:24.965 12:48:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:24.965 12:48:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:24.965 12:48:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:24.965 12:48:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:24.965 12:48:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:24.965 12:48:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:24.965 12:48:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:24.965 12:48:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:24.965 12:48:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:24.965 12:48:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:24.965 00:05:24.965 real 0m0.267s 00:05:24.965 user 0m0.209s 00:05:24.965 sys 0m0.048s 00:05:24.965 12:48:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:24.965 12:48:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:24.965 
************************************ 00:05:24.965 END TEST rpc_trace_cmd_test 00:05:24.965 ************************************ 00:05:25.226 12:48:42 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:25.226 12:48:42 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:25.226 12:48:42 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:25.226 12:48:42 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.226 12:48:42 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.226 12:48:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.226 ************************************ 00:05:25.226 START TEST rpc_daemon_integrity 00:05:25.226 ************************************ 00:05:25.226 12:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:25.226 12:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:25.226 12:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.226 12:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.226 12:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.226 12:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:25.226 12:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:25.226 12:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:25.226 12:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:25.226 12:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.226 12:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.226 12:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.226 12:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:25.226 12:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd 
bdev_get_bdevs 00:05:25.226 12:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.226 12:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.226 12:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.226 12:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:25.226 { 00:05:25.226 "name": "Malloc2", 00:05:25.226 "aliases": [ 00:05:25.226 "f0ff8499-c557-4ba4-a5b4-3c6cc2a1b8a8" 00:05:25.226 ], 00:05:25.226 "product_name": "Malloc disk", 00:05:25.226 "block_size": 512, 00:05:25.226 "num_blocks": 16384, 00:05:25.226 "uuid": "f0ff8499-c557-4ba4-a5b4-3c6cc2a1b8a8", 00:05:25.226 "assigned_rate_limits": { 00:05:25.226 "rw_ios_per_sec": 0, 00:05:25.226 "rw_mbytes_per_sec": 0, 00:05:25.226 "r_mbytes_per_sec": 0, 00:05:25.226 "w_mbytes_per_sec": 0 00:05:25.226 }, 00:05:25.226 "claimed": false, 00:05:25.226 "zoned": false, 00:05:25.226 "supported_io_types": { 00:05:25.226 "read": true, 00:05:25.226 "write": true, 00:05:25.226 "unmap": true, 00:05:25.226 "flush": true, 00:05:25.226 "reset": true, 00:05:25.226 "nvme_admin": false, 00:05:25.226 "nvme_io": false, 00:05:25.226 "nvme_io_md": false, 00:05:25.226 "write_zeroes": true, 00:05:25.226 "zcopy": true, 00:05:25.226 "get_zone_info": false, 00:05:25.226 "zone_management": false, 00:05:25.226 "zone_append": false, 00:05:25.226 "compare": false, 00:05:25.226 "compare_and_write": false, 00:05:25.226 "abort": true, 00:05:25.226 "seek_hole": false, 00:05:25.226 "seek_data": false, 00:05:25.226 "copy": true, 00:05:25.226 "nvme_iov_md": false 00:05:25.226 }, 00:05:25.226 "memory_domains": [ 00:05:25.226 { 00:05:25.226 "dma_device_id": "system", 00:05:25.226 "dma_device_type": 1 00:05:25.226 }, 00:05:25.226 { 00:05:25.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.226 "dma_device_type": 2 00:05:25.226 } 00:05:25.226 ], 00:05:25.226 "driver_specific": {} 00:05:25.226 } 00:05:25.226 ]' 00:05:25.226 
12:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:25.226 12:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:25.226 12:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:25.226 12:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.226 12:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.226 [2024-11-26 12:48:42.851146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:25.226 [2024-11-26 12:48:42.851229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:25.226 [2024-11-26 12:48:42.851260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:25.226 [2024-11-26 12:48:42.851270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:25.226 [2024-11-26 12:48:42.853944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:25.226 [2024-11-26 12:48:42.853983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:25.226 Passthru0 00:05:25.226 12:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.226 12:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:25.226 12:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.226 12:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.226 12:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.226 12:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:25.226 { 00:05:25.226 "name": "Malloc2", 00:05:25.226 "aliases": [ 00:05:25.226 "f0ff8499-c557-4ba4-a5b4-3c6cc2a1b8a8" 00:05:25.226 ], 00:05:25.226 "product_name": "Malloc disk", 00:05:25.226 "block_size": 512, 
00:05:25.226 "num_blocks": 16384, 00:05:25.226 "uuid": "f0ff8499-c557-4ba4-a5b4-3c6cc2a1b8a8", 00:05:25.226 "assigned_rate_limits": { 00:05:25.226 "rw_ios_per_sec": 0, 00:05:25.226 "rw_mbytes_per_sec": 0, 00:05:25.226 "r_mbytes_per_sec": 0, 00:05:25.226 "w_mbytes_per_sec": 0 00:05:25.226 }, 00:05:25.226 "claimed": true, 00:05:25.226 "claim_type": "exclusive_write", 00:05:25.226 "zoned": false, 00:05:25.226 "supported_io_types": { 00:05:25.226 "read": true, 00:05:25.226 "write": true, 00:05:25.226 "unmap": true, 00:05:25.226 "flush": true, 00:05:25.226 "reset": true, 00:05:25.226 "nvme_admin": false, 00:05:25.226 "nvme_io": false, 00:05:25.226 "nvme_io_md": false, 00:05:25.226 "write_zeroes": true, 00:05:25.226 "zcopy": true, 00:05:25.226 "get_zone_info": false, 00:05:25.226 "zone_management": false, 00:05:25.226 "zone_append": false, 00:05:25.226 "compare": false, 00:05:25.226 "compare_and_write": false, 00:05:25.226 "abort": true, 00:05:25.226 "seek_hole": false, 00:05:25.226 "seek_data": false, 00:05:25.226 "copy": true, 00:05:25.226 "nvme_iov_md": false 00:05:25.226 }, 00:05:25.226 "memory_domains": [ 00:05:25.226 { 00:05:25.226 "dma_device_id": "system", 00:05:25.226 "dma_device_type": 1 00:05:25.226 }, 00:05:25.226 { 00:05:25.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.226 "dma_device_type": 2 00:05:25.226 } 00:05:25.226 ], 00:05:25.226 "driver_specific": {} 00:05:25.226 }, 00:05:25.226 { 00:05:25.226 "name": "Passthru0", 00:05:25.226 "aliases": [ 00:05:25.226 "3ebbc466-483e-5b06-9e31-b0920832dca2" 00:05:25.226 ], 00:05:25.226 "product_name": "passthru", 00:05:25.226 "block_size": 512, 00:05:25.226 "num_blocks": 16384, 00:05:25.226 "uuid": "3ebbc466-483e-5b06-9e31-b0920832dca2", 00:05:25.226 "assigned_rate_limits": { 00:05:25.226 "rw_ios_per_sec": 0, 00:05:25.226 "rw_mbytes_per_sec": 0, 00:05:25.226 "r_mbytes_per_sec": 0, 00:05:25.226 "w_mbytes_per_sec": 0 00:05:25.226 }, 00:05:25.226 "claimed": false, 00:05:25.226 "zoned": false, 00:05:25.226 
"supported_io_types": { 00:05:25.226 "read": true, 00:05:25.226 "write": true, 00:05:25.226 "unmap": true, 00:05:25.226 "flush": true, 00:05:25.226 "reset": true, 00:05:25.226 "nvme_admin": false, 00:05:25.226 "nvme_io": false, 00:05:25.226 "nvme_io_md": false, 00:05:25.226 "write_zeroes": true, 00:05:25.226 "zcopy": true, 00:05:25.226 "get_zone_info": false, 00:05:25.226 "zone_management": false, 00:05:25.226 "zone_append": false, 00:05:25.226 "compare": false, 00:05:25.226 "compare_and_write": false, 00:05:25.226 "abort": true, 00:05:25.226 "seek_hole": false, 00:05:25.226 "seek_data": false, 00:05:25.226 "copy": true, 00:05:25.226 "nvme_iov_md": false 00:05:25.226 }, 00:05:25.226 "memory_domains": [ 00:05:25.226 { 00:05:25.226 "dma_device_id": "system", 00:05:25.226 "dma_device_type": 1 00:05:25.226 }, 00:05:25.226 { 00:05:25.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.226 "dma_device_type": 2 00:05:25.226 } 00:05:25.226 ], 00:05:25.226 "driver_specific": { 00:05:25.226 "passthru": { 00:05:25.226 "name": "Passthru0", 00:05:25.226 "base_bdev_name": "Malloc2" 00:05:25.226 } 00:05:25.226 } 00:05:25.226 } 00:05:25.226 ]' 00:05:25.226 12:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:25.487 12:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:25.487 12:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:25.487 12:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.487 12:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.487 12:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.487 12:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:25.487 12:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.487 12:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:05:25.487 12:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.487 12:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:25.487 12:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.487 12:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.487 12:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.487 12:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:25.487 12:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:25.487 12:48:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:25.487 00:05:25.487 real 0m0.323s 00:05:25.487 user 0m0.190s 00:05:25.487 sys 0m0.063s 00:05:25.487 12:48:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.487 12:48:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.487 ************************************ 00:05:25.487 END TEST rpc_daemon_integrity 00:05:25.487 ************************************ 00:05:25.487 12:48:43 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:25.487 12:48:43 rpc -- rpc/rpc.sh@84 -- # killprocess 69239 00:05:25.487 12:48:43 rpc -- common/autotest_common.sh@950 -- # '[' -z 69239 ']' 00:05:25.487 12:48:43 rpc -- common/autotest_common.sh@954 -- # kill -0 69239 00:05:25.487 12:48:43 rpc -- common/autotest_common.sh@955 -- # uname 00:05:25.487 12:48:43 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.487 12:48:43 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69239 00:05:25.487 12:48:43 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:25.487 12:48:43 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:25.487 killing process with pid 69239 00:05:25.487 12:48:43 rpc -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 69239' 00:05:25.487 12:48:43 rpc -- common/autotest_common.sh@969 -- # kill 69239 00:05:25.487 12:48:43 rpc -- common/autotest_common.sh@974 -- # wait 69239 00:05:26.428 00:05:26.428 real 0m3.222s 00:05:26.428 user 0m3.649s 00:05:26.428 sys 0m1.079s 00:05:26.428 12:48:43 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.428 12:48:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.428 ************************************ 00:05:26.428 END TEST rpc 00:05:26.428 ************************************ 00:05:26.428 12:48:43 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:26.428 12:48:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.428 12:48:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.428 12:48:43 -- common/autotest_common.sh@10 -- # set +x 00:05:26.428 ************************************ 00:05:26.428 START TEST skip_rpc 00:05:26.428 ************************************ 00:05:26.428 12:48:43 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:26.428 * Looking for test storage... 
00:05:26.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:26.428 12:48:43 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:26.428 12:48:43 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:26.428 12:48:43 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:26.428 12:48:44 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:26.428 12:48:44 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.428 12:48:44 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.428 12:48:44 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.428 12:48:44 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.428 12:48:44 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.428 12:48:44 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.428 12:48:44 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.428 12:48:44 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.428 12:48:44 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.428 12:48:44 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.428 12:48:44 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.428 12:48:44 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:26.428 12:48:44 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:26.428 12:48:44 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.428 12:48:44 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.428 12:48:44 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:26.428 12:48:44 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:26.428 12:48:44 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.428 12:48:44 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:26.428 12:48:44 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.428 12:48:44 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:26.428 12:48:44 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:26.428 12:48:44 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.428 12:48:44 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:26.428 12:48:44 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.428 12:48:44 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.428 12:48:44 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.428 12:48:44 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:26.428 12:48:44 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.428 12:48:44 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:26.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.428 --rc genhtml_branch_coverage=1 00:05:26.428 --rc genhtml_function_coverage=1 00:05:26.428 --rc genhtml_legend=1 00:05:26.428 --rc geninfo_all_blocks=1 00:05:26.428 --rc geninfo_unexecuted_blocks=1 00:05:26.428 00:05:26.428 ' 00:05:26.428 12:48:44 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:26.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.428 --rc genhtml_branch_coverage=1 00:05:26.428 --rc genhtml_function_coverage=1 00:05:26.428 --rc genhtml_legend=1 00:05:26.428 --rc geninfo_all_blocks=1 00:05:26.428 --rc geninfo_unexecuted_blocks=1 00:05:26.428 00:05:26.428 ' 00:05:26.428 12:48:44 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:05:26.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.429 --rc genhtml_branch_coverage=1 00:05:26.429 --rc genhtml_function_coverage=1 00:05:26.429 --rc genhtml_legend=1 00:05:26.429 --rc geninfo_all_blocks=1 00:05:26.429 --rc geninfo_unexecuted_blocks=1 00:05:26.429 00:05:26.429 ' 00:05:26.429 12:48:44 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:26.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.429 --rc genhtml_branch_coverage=1 00:05:26.429 --rc genhtml_function_coverage=1 00:05:26.429 --rc genhtml_legend=1 00:05:26.429 --rc geninfo_all_blocks=1 00:05:26.429 --rc geninfo_unexecuted_blocks=1 00:05:26.429 00:05:26.429 ' 00:05:26.429 12:48:44 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:26.429 12:48:44 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:26.429 12:48:44 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:26.429 12:48:44 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.429 12:48:44 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.429 12:48:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.429 ************************************ 00:05:26.429 START TEST skip_rpc 00:05:26.429 ************************************ 00:05:26.429 12:48:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:26.429 12:48:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69446 00:05:26.429 12:48:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.429 12:48:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:26.429 12:48:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:26.688 [2024-11-26 12:48:44.214248] Starting SPDK v24.09.1-pre 
git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:26.688 [2024-11-26 12:48:44.214398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69446 ] 00:05:26.948 [2024-11-26 12:48:44.368785] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.948 [2024-11-26 12:48:44.449066] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.239 12:48:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:32.239 12:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:32.239 12:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:32.239 12:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:32.239 12:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.239 12:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:32.239 12:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.239 12:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:32.239 12:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.239 12:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.239 12:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:32.239 12:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:32.239 12:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:32.239 12:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:32.239 12:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:05:32.239 12:48:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:32.239 12:48:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69446 00:05:32.239 12:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69446 ']' 00:05:32.239 12:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69446 00:05:32.239 12:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:32.239 12:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:32.239 12:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69446 00:05:32.239 12:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:32.239 12:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:32.239 killing process with pid 69446 00:05:32.239 12:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69446' 00:05:32.239 12:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69446 00:05:32.239 12:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69446 00:05:32.239 00:05:32.239 real 0m5.721s 00:05:32.239 user 0m5.135s 00:05:32.239 sys 0m0.505s 00:05:32.239 12:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.239 12:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.239 ************************************ 00:05:32.239 END TEST skip_rpc 00:05:32.239 ************************************ 00:05:32.239 12:48:49 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:32.239 12:48:49 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.239 12:48:49 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.239 12:48:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.239 
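The trace above exercises the `NOT` wrapper from autotest_common.sh: it runs a command that is expected to fail (here, `rpc_cmd spdk_get_version` against a target started with `--no-rpc-server`) and inverts the exit status, so the expected failure counts as a pass. A minimal standalone sketch of that idea follows — a hypothetical simplification, not SPDK's exact implementation, which additionally tracks `es` and distinguishes signal deaths via `es > 128` as visible in the trace:

```shell
#!/usr/bin/env bash
# Minimal negation wrapper: succeed only when the wrapped command fails.
# SPDK's real NOT() in autotest_common.sh is more elaborate (xtrace
# handling, signal detection); this shows only the core inversion.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, which is what we wanted
}

NOT false && echo "expected failure detected"
NOT true  || echo "unexpected success detected"
```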
************************************ 00:05:32.239 START TEST skip_rpc_with_json 00:05:32.239 ************************************ 00:05:32.239 12:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:32.239 12:48:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:32.239 12:48:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69534 00:05:32.239 12:48:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.239 12:48:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.239 12:48:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69534 00:05:32.239 12:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 69534 ']' 00:05:32.239 12:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.239 12:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.239 12:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.239 12:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.239 12:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:32.498 [2024-11-26 12:48:49.997639] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:32.498 [2024-11-26 12:48:49.997807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69534 ] 00:05:32.498 [2024-11-26 12:48:50.152741] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.756 [2024-11-26 12:48:50.233039] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.325 12:48:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.325 12:48:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:33.325 12:48:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:33.325 12:48:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.325 12:48:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.325 [2024-11-26 12:48:50.846190] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:33.325 request: 00:05:33.325 { 00:05:33.325 "trtype": "tcp", 00:05:33.325 "method": "nvmf_get_transports", 00:05:33.325 "req_id": 1 00:05:33.325 } 00:05:33.325 Got JSON-RPC error response 00:05:33.325 response: 00:05:33.325 { 00:05:33.325 "code": -19, 00:05:33.325 "message": "No such device" 00:05:33.325 } 00:05:33.325 12:48:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:33.325 12:48:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:33.325 12:48:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.325 12:48:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.325 [2024-11-26 12:48:50.858342] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:33.325 12:48:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.325 12:48:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:33.325 12:48:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.325 12:48:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.585 12:48:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.585 12:48:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:33.585 { 00:05:33.585 "subsystems": [ 00:05:33.585 { 00:05:33.585 "subsystem": "fsdev", 00:05:33.585 "config": [ 00:05:33.585 { 00:05:33.585 "method": "fsdev_set_opts", 00:05:33.585 "params": { 00:05:33.585 "fsdev_io_pool_size": 65535, 00:05:33.585 "fsdev_io_cache_size": 256 00:05:33.585 } 00:05:33.585 } 00:05:33.585 ] 00:05:33.585 }, 00:05:33.585 { 00:05:33.585 "subsystem": "keyring", 00:05:33.585 "config": [] 00:05:33.585 }, 00:05:33.585 { 00:05:33.585 "subsystem": "iobuf", 00:05:33.585 "config": [ 00:05:33.585 { 00:05:33.585 "method": "iobuf_set_options", 00:05:33.585 "params": { 00:05:33.585 "small_pool_count": 8192, 00:05:33.585 "large_pool_count": 1024, 00:05:33.585 "small_bufsize": 8192, 00:05:33.585 "large_bufsize": 135168 00:05:33.585 } 00:05:33.585 } 00:05:33.585 ] 00:05:33.585 }, 00:05:33.585 { 00:05:33.585 "subsystem": "sock", 00:05:33.585 "config": [ 00:05:33.585 { 00:05:33.585 "method": "sock_set_default_impl", 00:05:33.585 "params": { 00:05:33.585 "impl_name": "posix" 00:05:33.585 } 00:05:33.585 }, 00:05:33.585 { 00:05:33.585 "method": "sock_impl_set_options", 00:05:33.585 "params": { 00:05:33.585 "impl_name": "ssl", 00:05:33.585 "recv_buf_size": 4096, 00:05:33.585 "send_buf_size": 4096, 00:05:33.585 "enable_recv_pipe": true, 00:05:33.585 "enable_quickack": false, 00:05:33.585 "enable_placement_id": 0, 00:05:33.585 
"enable_zerocopy_send_server": true, 00:05:33.585 "enable_zerocopy_send_client": false, 00:05:33.585 "zerocopy_threshold": 0, 00:05:33.585 "tls_version": 0, 00:05:33.585 "enable_ktls": false 00:05:33.585 } 00:05:33.585 }, 00:05:33.585 { 00:05:33.585 "method": "sock_impl_set_options", 00:05:33.585 "params": { 00:05:33.585 "impl_name": "posix", 00:05:33.585 "recv_buf_size": 2097152, 00:05:33.585 "send_buf_size": 2097152, 00:05:33.585 "enable_recv_pipe": true, 00:05:33.585 "enable_quickack": false, 00:05:33.585 "enable_placement_id": 0, 00:05:33.585 "enable_zerocopy_send_server": true, 00:05:33.585 "enable_zerocopy_send_client": false, 00:05:33.585 "zerocopy_threshold": 0, 00:05:33.585 "tls_version": 0, 00:05:33.585 "enable_ktls": false 00:05:33.585 } 00:05:33.585 } 00:05:33.585 ] 00:05:33.585 }, 00:05:33.585 { 00:05:33.585 "subsystem": "vmd", 00:05:33.585 "config": [] 00:05:33.585 }, 00:05:33.585 { 00:05:33.585 "subsystem": "accel", 00:05:33.585 "config": [ 00:05:33.585 { 00:05:33.585 "method": "accel_set_options", 00:05:33.585 "params": { 00:05:33.585 "small_cache_size": 128, 00:05:33.585 "large_cache_size": 16, 00:05:33.585 "task_count": 2048, 00:05:33.585 "sequence_count": 2048, 00:05:33.585 "buf_count": 2048 00:05:33.585 } 00:05:33.585 } 00:05:33.585 ] 00:05:33.585 }, 00:05:33.585 { 00:05:33.585 "subsystem": "bdev", 00:05:33.585 "config": [ 00:05:33.585 { 00:05:33.585 "method": "bdev_set_options", 00:05:33.585 "params": { 00:05:33.585 "bdev_io_pool_size": 65535, 00:05:33.585 "bdev_io_cache_size": 256, 00:05:33.585 "bdev_auto_examine": true, 00:05:33.585 "iobuf_small_cache_size": 128, 00:05:33.585 "iobuf_large_cache_size": 16 00:05:33.585 } 00:05:33.585 }, 00:05:33.585 { 00:05:33.585 "method": "bdev_raid_set_options", 00:05:33.585 "params": { 00:05:33.585 "process_window_size_kb": 1024, 00:05:33.585 "process_max_bandwidth_mb_sec": 0 00:05:33.585 } 00:05:33.585 }, 00:05:33.585 { 00:05:33.585 "method": "bdev_iscsi_set_options", 00:05:33.585 "params": { 00:05:33.585 
"timeout_sec": 30 00:05:33.585 } 00:05:33.585 }, 00:05:33.585 { 00:05:33.585 "method": "bdev_nvme_set_options", 00:05:33.585 "params": { 00:05:33.585 "action_on_timeout": "none", 00:05:33.585 "timeout_us": 0, 00:05:33.585 "timeout_admin_us": 0, 00:05:33.585 "keep_alive_timeout_ms": 10000, 00:05:33.585 "arbitration_burst": 0, 00:05:33.585 "low_priority_weight": 0, 00:05:33.585 "medium_priority_weight": 0, 00:05:33.585 "high_priority_weight": 0, 00:05:33.585 "nvme_adminq_poll_period_us": 10000, 00:05:33.585 "nvme_ioq_poll_period_us": 0, 00:05:33.585 "io_queue_requests": 0, 00:05:33.585 "delay_cmd_submit": true, 00:05:33.585 "transport_retry_count": 4, 00:05:33.585 "bdev_retry_count": 3, 00:05:33.585 "transport_ack_timeout": 0, 00:05:33.585 "ctrlr_loss_timeout_sec": 0, 00:05:33.585 "reconnect_delay_sec": 0, 00:05:33.585 "fast_io_fail_timeout_sec": 0, 00:05:33.585 "disable_auto_failback": false, 00:05:33.585 "generate_uuids": false, 00:05:33.585 "transport_tos": 0, 00:05:33.585 "nvme_error_stat": false, 00:05:33.585 "rdma_srq_size": 0, 00:05:33.585 "io_path_stat": false, 00:05:33.585 "allow_accel_sequence": false, 00:05:33.585 "rdma_max_cq_size": 0, 00:05:33.585 "rdma_cm_event_timeout_ms": 0, 00:05:33.585 "dhchap_digests": [ 00:05:33.585 "sha256", 00:05:33.585 "sha384", 00:05:33.585 "sha512" 00:05:33.585 ], 00:05:33.585 "dhchap_dhgroups": [ 00:05:33.585 "null", 00:05:33.585 "ffdhe2048", 00:05:33.585 "ffdhe3072", 00:05:33.585 "ffdhe4096", 00:05:33.585 "ffdhe6144", 00:05:33.585 "ffdhe8192" 00:05:33.585 ] 00:05:33.585 } 00:05:33.585 }, 00:05:33.585 { 00:05:33.585 "method": "bdev_nvme_set_hotplug", 00:05:33.585 "params": { 00:05:33.585 "period_us": 100000, 00:05:33.585 "enable": false 00:05:33.585 } 00:05:33.585 }, 00:05:33.585 { 00:05:33.585 "method": "bdev_wait_for_examine" 00:05:33.585 } 00:05:33.585 ] 00:05:33.585 }, 00:05:33.585 { 00:05:33.585 "subsystem": "scsi", 00:05:33.585 "config": null 00:05:33.585 }, 00:05:33.585 { 00:05:33.585 "subsystem": "scheduler", 
00:05:33.585 "config": [ 00:05:33.585 { 00:05:33.585 "method": "framework_set_scheduler", 00:05:33.585 "params": { 00:05:33.585 "name": "static" 00:05:33.585 } 00:05:33.585 } 00:05:33.585 ] 00:05:33.585 }, 00:05:33.585 { 00:05:33.585 "subsystem": "vhost_scsi", 00:05:33.585 "config": [] 00:05:33.585 }, 00:05:33.585 { 00:05:33.585 "subsystem": "vhost_blk", 00:05:33.585 "config": [] 00:05:33.585 }, 00:05:33.585 { 00:05:33.585 "subsystem": "ublk", 00:05:33.585 "config": [] 00:05:33.586 }, 00:05:33.586 { 00:05:33.586 "subsystem": "nbd", 00:05:33.586 "config": [] 00:05:33.586 }, 00:05:33.586 { 00:05:33.586 "subsystem": "nvmf", 00:05:33.586 "config": [ 00:05:33.586 { 00:05:33.586 "method": "nvmf_set_config", 00:05:33.586 "params": { 00:05:33.586 "discovery_filter": "match_any", 00:05:33.586 "admin_cmd_passthru": { 00:05:33.586 "identify_ctrlr": false 00:05:33.586 }, 00:05:33.586 "dhchap_digests": [ 00:05:33.586 "sha256", 00:05:33.586 "sha384", 00:05:33.586 "sha512" 00:05:33.586 ], 00:05:33.586 "dhchap_dhgroups": [ 00:05:33.586 "null", 00:05:33.586 "ffdhe2048", 00:05:33.586 "ffdhe3072", 00:05:33.586 "ffdhe4096", 00:05:33.586 "ffdhe6144", 00:05:33.586 "ffdhe8192" 00:05:33.586 ] 00:05:33.586 } 00:05:33.586 }, 00:05:33.586 { 00:05:33.586 "method": "nvmf_set_max_subsystems", 00:05:33.586 "params": { 00:05:33.586 "max_subsystems": 1024 00:05:33.586 } 00:05:33.586 }, 00:05:33.586 { 00:05:33.586 "method": "nvmf_set_crdt", 00:05:33.586 "params": { 00:05:33.586 "crdt1": 0, 00:05:33.586 "crdt2": 0, 00:05:33.586 "crdt3": 0 00:05:33.586 } 00:05:33.586 }, 00:05:33.586 { 00:05:33.586 "method": "nvmf_create_transport", 00:05:33.586 "params": { 00:05:33.586 "trtype": "TCP", 00:05:33.586 "max_queue_depth": 128, 00:05:33.586 "max_io_qpairs_per_ctrlr": 127, 00:05:33.586 "in_capsule_data_size": 4096, 00:05:33.586 "max_io_size": 131072, 00:05:33.586 "io_unit_size": 131072, 00:05:33.586 "max_aq_depth": 128, 00:05:33.586 "num_shared_buffers": 511, 00:05:33.586 "buf_cache_size": 4294967295, 
00:05:33.586 "dif_insert_or_strip": false, 00:05:33.586 "zcopy": false, 00:05:33.586 "c2h_success": true, 00:05:33.586 "sock_priority": 0, 00:05:33.586 "abort_timeout_sec": 1, 00:05:33.586 "ack_timeout": 0, 00:05:33.586 "data_wr_pool_size": 0 00:05:33.586 } 00:05:33.586 } 00:05:33.586 ] 00:05:33.586 }, 00:05:33.586 { 00:05:33.586 "subsystem": "iscsi", 00:05:33.586 "config": [ 00:05:33.586 { 00:05:33.586 "method": "iscsi_set_options", 00:05:33.586 "params": { 00:05:33.586 "node_base": "iqn.2016-06.io.spdk", 00:05:33.586 "max_sessions": 128, 00:05:33.586 "max_connections_per_session": 2, 00:05:33.586 "max_queue_depth": 64, 00:05:33.586 "default_time2wait": 2, 00:05:33.586 "default_time2retain": 20, 00:05:33.586 "first_burst_length": 8192, 00:05:33.586 "immediate_data": true, 00:05:33.586 "allow_duplicated_isid": false, 00:05:33.586 "error_recovery_level": 0, 00:05:33.586 "nop_timeout": 60, 00:05:33.586 "nop_in_interval": 30, 00:05:33.586 "disable_chap": false, 00:05:33.586 "require_chap": false, 00:05:33.586 "mutual_chap": false, 00:05:33.586 "chap_group": 0, 00:05:33.586 "max_large_datain_per_connection": 64, 00:05:33.586 "max_r2t_per_connection": 4, 00:05:33.586 "pdu_pool_size": 36864, 00:05:33.586 "immediate_data_pool_size": 16384, 00:05:33.586 "data_out_pool_size": 2048 00:05:33.586 } 00:05:33.586 } 00:05:33.586 ] 00:05:33.586 } 00:05:33.586 ] 00:05:33.586 } 00:05:33.586 12:48:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:33.586 12:48:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69534 00:05:33.586 12:48:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69534 ']' 00:05:33.586 12:48:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69534 00:05:33.586 12:48:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:33.586 12:48:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
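The `save_config` dump above is the JSON that spdk_tgt later replays via `--json`. When inspecting a saved config of this shape, individual method parameters can be pulled out with jq; a sketch, assuming the `{"subsystems": [{"subsystem": ..., "config": [...]}]}` layout shown in the dump (the filename is the CONFIG_PATH used by this test):

```shell
# Pull the transport type out of a saved SPDK config.
# Assumed layout, per the dump above:
#   {"subsystems":[{"subsystem":"nvmf","config":[{"method":...,"params":...}]}]}
jq -r '.subsystems[]
       | select(.subsystem == "nvmf")
       | .config[]
       | select(.method == "nvmf_create_transport")
       | .params.trtype' /home/vagrant/spdk_repo/spdk/test/rpc/config.json
```

For the config dumped above, this would print `TCP`, matching the transport created by `rpc_cmd nvmf_create_transport -t tcp` earlier in the test.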
00:05:33.586 12:48:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69534 00:05:33.586 killing process with pid 69534 00:05:33.586 12:48:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:33.586 12:48:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:33.586 12:48:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69534' 00:05:33.586 12:48:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69534 00:05:33.586 12:48:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69534 00:05:34.154 12:48:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:34.154 12:48:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69568 00:05:34.154 12:48:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:39.431 12:48:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69568 00:05:39.431 12:48:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69568 ']' 00:05:39.431 12:48:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69568 00:05:39.431 12:48:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:39.431 12:48:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:39.431 12:48:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69568 00:05:39.431 killing process with pid 69568 00:05:39.431 12:48:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:39.431 12:48:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:05:39.431 12:48:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69568' 00:05:39.431 12:48:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69568 00:05:39.431 12:48:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69568 00:05:40.000 12:48:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:40.000 12:48:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:40.000 ************************************ 00:05:40.000 END TEST skip_rpc_with_json 00:05:40.000 ************************************ 00:05:40.000 00:05:40.000 real 0m7.560s 00:05:40.000 user 0m6.789s 00:05:40.000 sys 0m1.093s 00:05:40.000 12:48:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.000 12:48:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.000 12:48:57 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:40.000 12:48:57 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.001 12:48:57 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.001 12:48:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.001 ************************************ 00:05:40.001 START TEST skip_rpc_with_delay 00:05:40.001 ************************************ 00:05:40.001 12:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:40.001 12:48:57 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:40.001 12:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:40.001 12:48:57 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:40.001 12:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.001 12:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.001 12:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.001 12:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.001 12:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.001 12:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.001 12:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.001 12:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:40.001 12:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:40.001 [2024-11-26 12:48:57.628028] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:40.001 [2024-11-26 12:48:57.628310] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:40.260 12:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:40.260 12:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:40.260 12:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:40.260 12:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:40.260 00:05:40.260 real 0m0.167s 00:05:40.260 user 0m0.089s 00:05:40.260 sys 0m0.076s 00:05:40.260 ************************************ 00:05:40.260 END TEST skip_rpc_with_delay 00:05:40.260 ************************************ 00:05:40.260 12:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.260 12:48:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:40.260 12:48:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:40.260 12:48:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:40.260 12:48:57 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:40.260 12:48:57 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.260 12:48:57 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.260 12:48:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.260 ************************************ 00:05:40.260 START TEST exit_on_failed_rpc_init 00:05:40.260 ************************************ 00:05:40.260 12:48:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:40.260 12:48:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69685 00:05:40.260 12:48:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
00:05:40.260 12:48:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69685 00:05:40.260 12:48:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 69685 ']' 00:05:40.260 12:48:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.260 12:48:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.260 12:48:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.260 12:48:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.260 12:48:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:40.260 [2024-11-26 12:48:57.866512] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:40.260 [2024-11-26 12:48:57.866648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69685 ] 00:05:40.520 [2024-11-26 12:48:58.028854] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.520 [2024-11-26 12:48:58.101289] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.090 12:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.090 12:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:41.090 12:48:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.090 12:48:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:41.090 12:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:41.090 12:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:41.090 12:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:41.090 12:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.090 12:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:41.090 12:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.090 12:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:41.090 12:48:58 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.090 12:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:41.090 12:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:41.090 12:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:41.351 [2024-11-26 12:48:58.771762] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:41.351 [2024-11-26 12:48:58.772013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69703 ] 00:05:41.351 [2024-11-26 12:48:58.932328] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.351 [2024-11-26 12:48:59.008299] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.351 [2024-11-26 12:48:59.008501] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:41.351 [2024-11-26 12:48:59.008569] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:41.351 [2024-11-26 12:48:59.008651] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:41.612 12:48:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:41.612 12:48:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:41.612 12:48:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:41.612 12:48:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:41.612 12:48:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:41.612 12:48:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:41.612 12:48:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:41.612 12:48:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69685 00:05:41.612 12:48:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 69685 ']' 00:05:41.612 12:48:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 69685 00:05:41.612 12:48:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:41.612 12:48:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:41.612 12:48:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69685 00:05:41.612 12:48:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:41.612 12:48:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:41.613 12:48:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69685' 
00:05:41.613 killing process with pid 69685 00:05:41.613 12:48:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 69685 00:05:41.613 12:48:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 69685 00:05:42.553 00:05:42.553 real 0m2.114s 00:05:42.553 user 0m2.125s 00:05:42.553 sys 0m0.704s 00:05:42.553 12:48:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.553 ************************************ 00:05:42.553 END TEST exit_on_failed_rpc_init 00:05:42.553 ************************************ 00:05:42.553 12:48:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:42.553 12:48:59 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:42.553 00:05:42.553 real 0m16.083s 00:05:42.553 user 0m14.339s 00:05:42.553 sys 0m2.711s 00:05:42.553 12:48:59 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.553 12:48:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.553 ************************************ 00:05:42.553 END TEST skip_rpc 00:05:42.553 ************************************ 00:05:42.553 12:48:59 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:42.553 12:48:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.553 12:48:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.553 12:48:59 -- common/autotest_common.sh@10 -- # set +x 00:05:42.553 ************************************ 00:05:42.553 START TEST rpc_client 00:05:42.553 ************************************ 00:05:42.553 12:49:00 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:42.553 * Looking for test storage... 
00:05:42.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:42.553 12:49:00 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:42.553 12:49:00 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:42.553 12:49:00 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:42.553 12:49:00 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:42.553 12:49:00 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.554 12:49:00 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.554 12:49:00 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.554 12:49:00 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.554 12:49:00 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.554 12:49:00 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.554 12:49:00 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.554 12:49:00 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.554 12:49:00 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.554 12:49:00 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.554 12:49:00 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.554 12:49:00 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:42.554 12:49:00 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:42.554 12:49:00 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.554 12:49:00 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:42.554 12:49:00 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:42.554 12:49:00 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:42.554 12:49:00 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.554 12:49:00 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:42.554 12:49:00 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.554 12:49:00 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:42.554 12:49:00 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:42.554 12:49:00 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.813 12:49:00 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:42.813 12:49:00 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.813 12:49:00 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.813 12:49:00 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.813 12:49:00 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:42.813 12:49:00 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.813 12:49:00 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:42.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.813 --rc genhtml_branch_coverage=1 00:05:42.813 --rc genhtml_function_coverage=1 00:05:42.813 --rc genhtml_legend=1 00:05:42.813 --rc geninfo_all_blocks=1 00:05:42.813 --rc geninfo_unexecuted_blocks=1 00:05:42.813 00:05:42.813 ' 00:05:42.813 12:49:00 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:42.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.813 --rc genhtml_branch_coverage=1 00:05:42.813 --rc genhtml_function_coverage=1 00:05:42.813 --rc genhtml_legend=1 00:05:42.813 --rc geninfo_all_blocks=1 00:05:42.813 --rc geninfo_unexecuted_blocks=1 00:05:42.813 00:05:42.813 ' 00:05:42.813 12:49:00 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:42.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.813 --rc genhtml_branch_coverage=1 00:05:42.813 --rc genhtml_function_coverage=1 00:05:42.813 --rc genhtml_legend=1 00:05:42.813 --rc geninfo_all_blocks=1 00:05:42.813 --rc geninfo_unexecuted_blocks=1 00:05:42.813 00:05:42.813 ' 00:05:42.813 12:49:00 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:42.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.813 --rc genhtml_branch_coverage=1 00:05:42.813 --rc genhtml_function_coverage=1 00:05:42.813 --rc genhtml_legend=1 00:05:42.813 --rc geninfo_all_blocks=1 00:05:42.813 --rc geninfo_unexecuted_blocks=1 00:05:42.813 00:05:42.813 ' 00:05:42.813 12:49:00 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:42.813 OK 00:05:42.813 12:49:00 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:42.813 ************************************ 00:05:42.813 END TEST rpc_client 00:05:42.813 ************************************ 00:05:42.813 00:05:42.813 real 0m0.294s 00:05:42.813 user 0m0.159s 00:05:42.813 sys 0m0.149s 00:05:42.813 12:49:00 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.813 12:49:00 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:42.813 12:49:00 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:42.813 12:49:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.813 12:49:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.813 12:49:00 -- common/autotest_common.sh@10 -- # set +x 00:05:42.813 ************************************ 00:05:42.813 START TEST json_config 00:05:42.813 ************************************ 00:05:42.813 12:49:00 json_config -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:42.813 12:49:00 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:42.813 12:49:00 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:42.813 12:49:00 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:43.073 12:49:00 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:43.073 12:49:00 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.073 12:49:00 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.073 12:49:00 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.073 12:49:00 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.073 12:49:00 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.073 12:49:00 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.073 12:49:00 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.073 12:49:00 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.073 12:49:00 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.073 12:49:00 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.073 12:49:00 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.073 12:49:00 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:43.073 12:49:00 json_config -- scripts/common.sh@345 -- # : 1 00:05:43.073 12:49:00 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.073 12:49:00 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:43.073 12:49:00 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:43.073 12:49:00 json_config -- scripts/common.sh@353 -- # local d=1 00:05:43.073 12:49:00 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.073 12:49:00 json_config -- scripts/common.sh@355 -- # echo 1 00:05:43.073 12:49:00 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.073 12:49:00 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:43.073 12:49:00 json_config -- scripts/common.sh@353 -- # local d=2 00:05:43.073 12:49:00 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.073 12:49:00 json_config -- scripts/common.sh@355 -- # echo 2 00:05:43.073 12:49:00 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.074 12:49:00 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.074 12:49:00 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.074 12:49:00 json_config -- scripts/common.sh@368 -- # return 0 00:05:43.074 12:49:00 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.074 12:49:00 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:43.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.074 --rc genhtml_branch_coverage=1 00:05:43.074 --rc genhtml_function_coverage=1 00:05:43.074 --rc genhtml_legend=1 00:05:43.074 --rc geninfo_all_blocks=1 00:05:43.074 --rc geninfo_unexecuted_blocks=1 00:05:43.074 00:05:43.074 ' 00:05:43.074 12:49:00 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:43.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.074 --rc genhtml_branch_coverage=1 00:05:43.074 --rc genhtml_function_coverage=1 00:05:43.074 --rc genhtml_legend=1 00:05:43.074 --rc geninfo_all_blocks=1 00:05:43.074 --rc geninfo_unexecuted_blocks=1 00:05:43.074 00:05:43.074 ' 00:05:43.074 12:49:00 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:43.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.074 --rc genhtml_branch_coverage=1 00:05:43.074 --rc genhtml_function_coverage=1 00:05:43.074 --rc genhtml_legend=1 00:05:43.074 --rc geninfo_all_blocks=1 00:05:43.074 --rc geninfo_unexecuted_blocks=1 00:05:43.074 00:05:43.074 ' 00:05:43.074 12:49:00 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:43.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.074 --rc genhtml_branch_coverage=1 00:05:43.074 --rc genhtml_function_coverage=1 00:05:43.074 --rc genhtml_legend=1 00:05:43.074 --rc geninfo_all_blocks=1 00:05:43.074 --rc geninfo_unexecuted_blocks=1 00:05:43.074 00:05:43.074 ' 00:05:43.074 12:49:00 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:43.074 12:49:00 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:43.074 12:49:00 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:43.074 12:49:00 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:43.074 12:49:00 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:43.074 12:49:00 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:43.074 12:49:00 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:43.074 12:49:00 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:43.074 12:49:00 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:43.074 12:49:00 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:43.074 12:49:00 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:43.074 12:49:00 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:43.074 12:49:00 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1ee4561f-c8c1-44d1-ac3c-57f4ce74092b 00:05:43.074 12:49:00 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=1ee4561f-c8c1-44d1-ac3c-57f4ce74092b 00:05:43.074 12:49:00 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:43.074 12:49:00 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:43.074 12:49:00 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:43.074 12:49:00 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:43.074 12:49:00 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:43.074 12:49:00 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:43.074 12:49:00 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.074 12:49:00 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.074 12:49:00 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.074 12:49:00 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.074 12:49:00 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.074 12:49:00 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.074 12:49:00 json_config -- paths/export.sh@5 -- # export PATH 00:05:43.074 12:49:00 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.074 12:49:00 json_config -- nvmf/common.sh@51 -- # : 0 00:05:43.074 12:49:00 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:43.074 12:49:00 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:43.074 12:49:00 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:43.074 12:49:00 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:43.074 12:49:00 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:43.074 12:49:00 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:43.074 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:43.074 12:49:00 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:43.074 12:49:00 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:43.074 12:49:00 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:43.074 12:49:00 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:43.074 12:49:00 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:43.074 12:49:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:43.074 12:49:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:43.074 12:49:00 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:43.074 12:49:00 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:43.074 WARNING: No tests are enabled so not running JSON configuration tests 00:05:43.074 12:49:00 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:43.074 00:05:43.074 real 0m0.236s 00:05:43.074 user 0m0.137s 00:05:43.074 sys 0m0.103s 00:05:43.074 12:49:00 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.074 12:49:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.074 ************************************ 00:05:43.074 END TEST json_config 00:05:43.074 ************************************ 00:05:43.074 12:49:00 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:43.074 12:49:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.074 12:49:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.074 12:49:00 -- common/autotest_common.sh@10 -- # set +x 00:05:43.074 ************************************ 00:05:43.074 START TEST json_config_extra_key 00:05:43.074 ************************************ 00:05:43.074 12:49:00 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:43.334 12:49:00 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:43.334 12:49:00 json_config_extra_key -- 
common/autotest_common.sh@1681 -- # lcov --version 00:05:43.334 12:49:00 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:43.334 12:49:00 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:43.334 12:49:00 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.334 12:49:00 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.335 12:49:00 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.335 12:49:00 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.335 12:49:00 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.335 12:49:00 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.335 12:49:00 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.335 12:49:00 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.335 12:49:00 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.335 12:49:00 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.335 12:49:00 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.335 12:49:00 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:43.335 12:49:00 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:43.335 12:49:00 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.335 12:49:00 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:43.335 12:49:00 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:43.335 12:49:00 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:43.335 12:49:00 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.335 12:49:00 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:43.335 12:49:00 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.335 12:49:00 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:43.335 12:49:00 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:43.335 12:49:00 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.335 12:49:00 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:43.335 12:49:00 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.335 12:49:00 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.335 12:49:00 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.335 12:49:00 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:43.335 12:49:00 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.335 12:49:00 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:43.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.335 --rc genhtml_branch_coverage=1 00:05:43.335 --rc genhtml_function_coverage=1 00:05:43.335 --rc genhtml_legend=1 00:05:43.335 --rc geninfo_all_blocks=1 00:05:43.335 --rc geninfo_unexecuted_blocks=1 00:05:43.335 00:05:43.335 ' 00:05:43.335 12:49:00 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:43.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.335 --rc genhtml_branch_coverage=1 00:05:43.335 --rc genhtml_function_coverage=1 00:05:43.335 --rc 
genhtml_legend=1 00:05:43.335 --rc geninfo_all_blocks=1 00:05:43.335 --rc geninfo_unexecuted_blocks=1 00:05:43.335 00:05:43.335 ' 00:05:43.335 12:49:00 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:43.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.335 --rc genhtml_branch_coverage=1 00:05:43.335 --rc genhtml_function_coverage=1 00:05:43.335 --rc genhtml_legend=1 00:05:43.335 --rc geninfo_all_blocks=1 00:05:43.335 --rc geninfo_unexecuted_blocks=1 00:05:43.335 00:05:43.335 ' 00:05:43.335 12:49:00 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:43.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.335 --rc genhtml_branch_coverage=1 00:05:43.335 --rc genhtml_function_coverage=1 00:05:43.335 --rc genhtml_legend=1 00:05:43.335 --rc geninfo_all_blocks=1 00:05:43.335 --rc geninfo_unexecuted_blocks=1 00:05:43.335 00:05:43.335 ' 00:05:43.335 12:49:00 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:43.335 12:49:00 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:43.335 12:49:00 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:43.335 12:49:00 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:43.335 12:49:00 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:43.335 12:49:00 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:43.335 12:49:00 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:43.335 12:49:00 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:43.335 12:49:00 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:43.335 12:49:00 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:43.335 12:49:00 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:43.335 12:49:00 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:43.335 12:49:00 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1ee4561f-c8c1-44d1-ac3c-57f4ce74092b 00:05:43.335 12:49:00 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=1ee4561f-c8c1-44d1-ac3c-57f4ce74092b 00:05:43.335 12:49:00 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:43.335 12:49:00 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:43.335 12:49:00 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:43.335 12:49:00 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:43.335 12:49:00 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:43.335 12:49:00 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:43.335 12:49:00 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.335 12:49:00 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.335 12:49:00 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.335 12:49:00 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.335 12:49:00 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.335 12:49:00 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.335 12:49:00 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:43.335 12:49:00 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.335 12:49:00 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:43.335 12:49:00 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:43.335 12:49:00 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:43.335 12:49:00 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:43.335 12:49:00 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:43.335 12:49:00 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:43.335 12:49:00 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:43.335 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:43.335 12:49:00 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:43.335 12:49:00 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:43.335 12:49:00 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:43.335 12:49:00 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:43.335 12:49:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:43.335 12:49:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:43.335 12:49:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:43.335 12:49:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:43.335 12:49:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:43.335 12:49:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:43.335 12:49:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:43.335 12:49:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:43.335 12:49:00 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:43.335 12:49:00 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:43.335 INFO: launching applications... 
00:05:43.335 12:49:00 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:43.335 12:49:00 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:43.335 12:49:00 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:43.335 12:49:00 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:43.335 12:49:00 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:43.335 12:49:00 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:43.335 12:49:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:43.335 12:49:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:43.335 12:49:00 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69891 00:05:43.335 12:49:00 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:43.335 Waiting for target to run... 00:05:43.335 12:49:00 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69891 /var/tmp/spdk_tgt.sock 00:05:43.336 12:49:00 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 69891 ']' 00:05:43.336 12:49:00 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:43.336 12:49:00 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:43.336 12:49:00 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:43.336 12:49:00 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:05:43.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:43.336 12:49:00 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:43.336 12:49:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:43.336 [2024-11-26 12:49:00.994476] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:43.336 [2024-11-26 12:49:00.994697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69891 ] 00:05:43.931 [2024-11-26 12:49:01.369415] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.931 [2024-11-26 12:49:01.411693] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.202 12:49:01 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:44.202 12:49:01 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:44.202 12:49:01 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:44.202 00:05:44.202 INFO: shutting down applications... 00:05:44.202 12:49:01 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:44.202 12:49:01 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:44.202 12:49:01 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:44.202 12:49:01 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:44.202 12:49:01 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69891 ]] 00:05:44.202 12:49:01 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69891 00:05:44.202 12:49:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:44.202 12:49:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:44.202 12:49:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69891 00:05:44.202 12:49:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:44.772 12:49:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:44.772 12:49:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:44.772 12:49:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69891 00:05:44.772 12:49:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:45.342 12:49:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:45.342 12:49:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:45.342 12:49:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69891 00:05:45.342 12:49:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:45.342 12:49:02 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:45.342 12:49:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:45.342 12:49:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:45.342 SPDK target shutdown done 00:05:45.342 12:49:02 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 
00:05:45.342 Success 00:05:45.342 00:05:45.342 real 0m2.174s 00:05:45.342 user 0m1.654s 00:05:45.342 sys 0m0.511s 00:05:45.342 12:49:02 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.342 ************************************ 00:05:45.342 END TEST json_config_extra_key 00:05:45.342 ************************************ 00:05:45.342 12:49:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:45.342 12:49:02 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:45.342 12:49:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.342 12:49:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.342 12:49:02 -- common/autotest_common.sh@10 -- # set +x 00:05:45.342 ************************************ 00:05:45.342 START TEST alias_rpc 00:05:45.342 ************************************ 00:05:45.342 12:49:02 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:45.603 * Looking for test storage... 
00:05:45.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:45.603 12:49:03 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:45.603 12:49:03 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:45.603 12:49:03 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:45.603 12:49:03 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:45.603 12:49:03 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.603 12:49:03 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.603 12:49:03 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.603 12:49:03 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.603 12:49:03 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.603 12:49:03 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.603 12:49:03 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.603 12:49:03 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.603 12:49:03 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.603 12:49:03 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.603 12:49:03 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.603 12:49:03 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:45.603 12:49:03 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:45.603 12:49:03 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.603 12:49:03 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.603 12:49:03 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:45.603 12:49:03 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:45.603 12:49:03 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.603 12:49:03 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:45.603 12:49:03 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.603 12:49:03 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:45.603 12:49:03 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:45.603 12:49:03 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.603 12:49:03 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:45.603 12:49:03 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.603 12:49:03 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.603 12:49:03 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.603 12:49:03 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:45.603 12:49:03 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.603 12:49:03 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:45.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.603 --rc genhtml_branch_coverage=1 00:05:45.603 --rc genhtml_function_coverage=1 00:05:45.603 --rc genhtml_legend=1 00:05:45.603 --rc geninfo_all_blocks=1 00:05:45.603 --rc geninfo_unexecuted_blocks=1 00:05:45.603 00:05:45.603 ' 00:05:45.603 12:49:03 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:45.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.603 --rc genhtml_branch_coverage=1 00:05:45.603 --rc genhtml_function_coverage=1 00:05:45.603 --rc genhtml_legend=1 00:05:45.603 --rc geninfo_all_blocks=1 00:05:45.603 --rc geninfo_unexecuted_blocks=1 00:05:45.603 00:05:45.603 ' 00:05:45.603 12:49:03 alias_rpc -- common/autotest_common.sh@1695 -- 
# export 'LCOV=lcov 00:05:45.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.603 --rc genhtml_branch_coverage=1 00:05:45.603 --rc genhtml_function_coverage=1 00:05:45.603 --rc genhtml_legend=1 00:05:45.603 --rc geninfo_all_blocks=1 00:05:45.603 --rc geninfo_unexecuted_blocks=1 00:05:45.603 00:05:45.603 ' 00:05:45.603 12:49:03 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:45.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.603 --rc genhtml_branch_coverage=1 00:05:45.603 --rc genhtml_function_coverage=1 00:05:45.603 --rc genhtml_legend=1 00:05:45.603 --rc geninfo_all_blocks=1 00:05:45.603 --rc geninfo_unexecuted_blocks=1 00:05:45.603 00:05:45.603 ' 00:05:45.604 12:49:03 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:45.604 12:49:03 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69971 00:05:45.604 12:49:03 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:45.604 12:49:03 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69971 00:05:45.604 12:49:03 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 69971 ']' 00:05:45.604 12:49:03 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.604 12:49:03 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.604 12:49:03 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.604 12:49:03 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.604 12:49:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.604 [2024-11-26 12:49:03.226320] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:45.604 [2024-11-26 12:49:03.226512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69971 ] 00:05:45.863 [2024-11-26 12:49:03.382915] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.863 [2024-11-26 12:49:03.451891] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.433 12:49:04 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.433 12:49:04 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:46.433 12:49:04 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:46.690 12:49:04 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69971 00:05:46.690 12:49:04 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 69971 ']' 00:05:46.690 12:49:04 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 69971 00:05:46.690 12:49:04 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:46.690 12:49:04 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.690 12:49:04 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69971 00:05:46.690 killing process with pid 69971 00:05:46.690 12:49:04 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:46.690 12:49:04 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:46.690 12:49:04 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69971' 00:05:46.690 12:49:04 alias_rpc -- common/autotest_common.sh@969 -- # kill 69971 00:05:46.690 12:49:04 alias_rpc -- common/autotest_common.sh@974 -- # wait 69971 00:05:47.627 ************************************ 00:05:47.627 END TEST alias_rpc 00:05:47.627 ************************************ 00:05:47.627 00:05:47.627 real 
0m2.028s 00:05:47.627 user 0m1.852s 00:05:47.627 sys 0m0.687s 00:05:47.627 12:49:04 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.627 12:49:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.627 12:49:04 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:47.627 12:49:04 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:47.627 12:49:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.627 12:49:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.627 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:05:47.627 ************************************ 00:05:47.627 START TEST spdkcli_tcp 00:05:47.627 ************************************ 00:05:47.627 12:49:05 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:47.627 * Looking for test storage... 00:05:47.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:47.627 12:49:05 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:47.628 12:49:05 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:47.628 12:49:05 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:47.628 12:49:05 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:47.628 12:49:05 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.628 12:49:05 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.628 12:49:05 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.628 12:49:05 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.628 12:49:05 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.628 12:49:05 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.628 12:49:05 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.628 12:49:05 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.628 
12:49:05 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.628 12:49:05 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.628 12:49:05 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.628 12:49:05 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:47.628 12:49:05 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:47.628 12:49:05 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.628 12:49:05 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.628 12:49:05 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:47.628 12:49:05 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:47.628 12:49:05 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.628 12:49:05 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:47.628 12:49:05 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.628 12:49:05 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:47.628 12:49:05 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:47.628 12:49:05 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.628 12:49:05 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:47.628 12:49:05 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.628 12:49:05 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.628 12:49:05 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.628 12:49:05 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:47.628 12:49:05 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.628 12:49:05 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:47.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.628 --rc genhtml_branch_coverage=1 00:05:47.628 --rc genhtml_function_coverage=1 00:05:47.628 --rc genhtml_legend=1 
00:05:47.628 --rc geninfo_all_blocks=1 00:05:47.628 --rc geninfo_unexecuted_blocks=1 00:05:47.628 00:05:47.628 ' 00:05:47.628 12:49:05 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:47.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.628 --rc genhtml_branch_coverage=1 00:05:47.628 --rc genhtml_function_coverage=1 00:05:47.628 --rc genhtml_legend=1 00:05:47.628 --rc geninfo_all_blocks=1 00:05:47.628 --rc geninfo_unexecuted_blocks=1 00:05:47.628 00:05:47.628 ' 00:05:47.628 12:49:05 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:47.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.628 --rc genhtml_branch_coverage=1 00:05:47.628 --rc genhtml_function_coverage=1 00:05:47.628 --rc genhtml_legend=1 00:05:47.628 --rc geninfo_all_blocks=1 00:05:47.628 --rc geninfo_unexecuted_blocks=1 00:05:47.628 00:05:47.628 ' 00:05:47.628 12:49:05 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:47.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.628 --rc genhtml_branch_coverage=1 00:05:47.628 --rc genhtml_function_coverage=1 00:05:47.628 --rc genhtml_legend=1 00:05:47.628 --rc geninfo_all_blocks=1 00:05:47.628 --rc geninfo_unexecuted_blocks=1 00:05:47.628 00:05:47.628 ' 00:05:47.628 12:49:05 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:47.628 12:49:05 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:47.628 12:49:05 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:47.628 12:49:05 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:47.628 12:49:05 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:47.628 12:49:05 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:47.628 12:49:05 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:47.628 12:49:05 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:47.628 12:49:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.628 12:49:05 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70056 00:05:47.628 12:49:05 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:47.628 12:49:05 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70056 00:05:47.628 12:49:05 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 70056 ']' 00:05:47.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.628 12:49:05 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.628 12:49:05 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:47.628 12:49:05 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.628 12:49:05 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:47.628 12:49:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.888 [2024-11-26 12:49:05.340540] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:47.888 [2024-11-26 12:49:05.340658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70056 ] 00:05:47.888 [2024-11-26 12:49:05.501046] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.148 [2024-11-26 12:49:05.572881] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.148 [2024-11-26 12:49:05.573015] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.719 12:49:06 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.719 12:49:06 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:48.719 12:49:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70073 00:05:48.719 12:49:06 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:48.719 12:49:06 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:48.719 [ 00:05:48.719 "bdev_malloc_delete", 00:05:48.719 "bdev_malloc_create", 00:05:48.719 "bdev_null_resize", 00:05:48.719 "bdev_null_delete", 00:05:48.719 "bdev_null_create", 00:05:48.719 "bdev_nvme_cuse_unregister", 00:05:48.719 "bdev_nvme_cuse_register", 00:05:48.719 "bdev_opal_new_user", 00:05:48.719 "bdev_opal_set_lock_state", 00:05:48.719 "bdev_opal_delete", 00:05:48.719 "bdev_opal_get_info", 00:05:48.719 "bdev_opal_create", 00:05:48.719 "bdev_nvme_opal_revert", 00:05:48.719 "bdev_nvme_opal_init", 00:05:48.719 "bdev_nvme_send_cmd", 00:05:48.719 "bdev_nvme_set_keys", 00:05:48.719 "bdev_nvme_get_path_iostat", 00:05:48.719 "bdev_nvme_get_mdns_discovery_info", 00:05:48.719 "bdev_nvme_stop_mdns_discovery", 00:05:48.719 "bdev_nvme_start_mdns_discovery", 00:05:48.719 "bdev_nvme_set_multipath_policy", 00:05:48.719 
"bdev_nvme_set_preferred_path", 00:05:48.719 "bdev_nvme_get_io_paths", 00:05:48.719 "bdev_nvme_remove_error_injection", 00:05:48.719 "bdev_nvme_add_error_injection", 00:05:48.719 "bdev_nvme_get_discovery_info", 00:05:48.719 "bdev_nvme_stop_discovery", 00:05:48.719 "bdev_nvme_start_discovery", 00:05:48.719 "bdev_nvme_get_controller_health_info", 00:05:48.719 "bdev_nvme_disable_controller", 00:05:48.719 "bdev_nvme_enable_controller", 00:05:48.719 "bdev_nvme_reset_controller", 00:05:48.719 "bdev_nvme_get_transport_statistics", 00:05:48.719 "bdev_nvme_apply_firmware", 00:05:48.719 "bdev_nvme_detach_controller", 00:05:48.719 "bdev_nvme_get_controllers", 00:05:48.719 "bdev_nvme_attach_controller", 00:05:48.719 "bdev_nvme_set_hotplug", 00:05:48.719 "bdev_nvme_set_options", 00:05:48.719 "bdev_passthru_delete", 00:05:48.719 "bdev_passthru_create", 00:05:48.719 "bdev_lvol_set_parent_bdev", 00:05:48.719 "bdev_lvol_set_parent", 00:05:48.719 "bdev_lvol_check_shallow_copy", 00:05:48.719 "bdev_lvol_start_shallow_copy", 00:05:48.719 "bdev_lvol_grow_lvstore", 00:05:48.719 "bdev_lvol_get_lvols", 00:05:48.719 "bdev_lvol_get_lvstores", 00:05:48.719 "bdev_lvol_delete", 00:05:48.719 "bdev_lvol_set_read_only", 00:05:48.719 "bdev_lvol_resize", 00:05:48.719 "bdev_lvol_decouple_parent", 00:05:48.719 "bdev_lvol_inflate", 00:05:48.719 "bdev_lvol_rename", 00:05:48.719 "bdev_lvol_clone_bdev", 00:05:48.719 "bdev_lvol_clone", 00:05:48.719 "bdev_lvol_snapshot", 00:05:48.719 "bdev_lvol_create", 00:05:48.719 "bdev_lvol_delete_lvstore", 00:05:48.719 "bdev_lvol_rename_lvstore", 00:05:48.719 "bdev_lvol_create_lvstore", 00:05:48.719 "bdev_raid_set_options", 00:05:48.719 "bdev_raid_remove_base_bdev", 00:05:48.719 "bdev_raid_add_base_bdev", 00:05:48.719 "bdev_raid_delete", 00:05:48.719 "bdev_raid_create", 00:05:48.719 "bdev_raid_get_bdevs", 00:05:48.719 "bdev_error_inject_error", 00:05:48.719 "bdev_error_delete", 00:05:48.719 "bdev_error_create", 00:05:48.719 "bdev_split_delete", 00:05:48.719 
"bdev_split_create", 00:05:48.719 "bdev_delay_delete", 00:05:48.719 "bdev_delay_create", 00:05:48.719 "bdev_delay_update_latency", 00:05:48.719 "bdev_zone_block_delete", 00:05:48.719 "bdev_zone_block_create", 00:05:48.719 "blobfs_create", 00:05:48.719 "blobfs_detect", 00:05:48.719 "blobfs_set_cache_size", 00:05:48.719 "bdev_aio_delete", 00:05:48.719 "bdev_aio_rescan", 00:05:48.720 "bdev_aio_create", 00:05:48.720 "bdev_ftl_set_property", 00:05:48.720 "bdev_ftl_get_properties", 00:05:48.720 "bdev_ftl_get_stats", 00:05:48.720 "bdev_ftl_unmap", 00:05:48.720 "bdev_ftl_unload", 00:05:48.720 "bdev_ftl_delete", 00:05:48.720 "bdev_ftl_load", 00:05:48.720 "bdev_ftl_create", 00:05:48.720 "bdev_virtio_attach_controller", 00:05:48.720 "bdev_virtio_scsi_get_devices", 00:05:48.720 "bdev_virtio_detach_controller", 00:05:48.720 "bdev_virtio_blk_set_hotplug", 00:05:48.720 "bdev_iscsi_delete", 00:05:48.720 "bdev_iscsi_create", 00:05:48.720 "bdev_iscsi_set_options", 00:05:48.720 "accel_error_inject_error", 00:05:48.720 "ioat_scan_accel_module", 00:05:48.720 "dsa_scan_accel_module", 00:05:48.720 "iaa_scan_accel_module", 00:05:48.720 "keyring_file_remove_key", 00:05:48.720 "keyring_file_add_key", 00:05:48.720 "keyring_linux_set_options", 00:05:48.720 "fsdev_aio_delete", 00:05:48.720 "fsdev_aio_create", 00:05:48.720 "iscsi_get_histogram", 00:05:48.720 "iscsi_enable_histogram", 00:05:48.720 "iscsi_set_options", 00:05:48.720 "iscsi_get_auth_groups", 00:05:48.720 "iscsi_auth_group_remove_secret", 00:05:48.720 "iscsi_auth_group_add_secret", 00:05:48.720 "iscsi_delete_auth_group", 00:05:48.720 "iscsi_create_auth_group", 00:05:48.720 "iscsi_set_discovery_auth", 00:05:48.720 "iscsi_get_options", 00:05:48.720 "iscsi_target_node_request_logout", 00:05:48.720 "iscsi_target_node_set_redirect", 00:05:48.720 "iscsi_target_node_set_auth", 00:05:48.720 "iscsi_target_node_add_lun", 00:05:48.720 "iscsi_get_stats", 00:05:48.720 "iscsi_get_connections", 00:05:48.720 "iscsi_portal_group_set_auth", 
00:05:48.720 "iscsi_start_portal_group", 00:05:48.720 "iscsi_delete_portal_group", 00:05:48.720 "iscsi_create_portal_group", 00:05:48.720 "iscsi_get_portal_groups", 00:05:48.720 "iscsi_delete_target_node", 00:05:48.720 "iscsi_target_node_remove_pg_ig_maps", 00:05:48.720 "iscsi_target_node_add_pg_ig_maps", 00:05:48.720 "iscsi_create_target_node", 00:05:48.720 "iscsi_get_target_nodes", 00:05:48.720 "iscsi_delete_initiator_group", 00:05:48.720 "iscsi_initiator_group_remove_initiators", 00:05:48.720 "iscsi_initiator_group_add_initiators", 00:05:48.720 "iscsi_create_initiator_group", 00:05:48.720 "iscsi_get_initiator_groups", 00:05:48.720 "nvmf_set_crdt", 00:05:48.720 "nvmf_set_config", 00:05:48.720 "nvmf_set_max_subsystems", 00:05:48.720 "nvmf_stop_mdns_prr", 00:05:48.720 "nvmf_publish_mdns_prr", 00:05:48.720 "nvmf_subsystem_get_listeners", 00:05:48.720 "nvmf_subsystem_get_qpairs", 00:05:48.720 "nvmf_subsystem_get_controllers", 00:05:48.720 "nvmf_get_stats", 00:05:48.720 "nvmf_get_transports", 00:05:48.720 "nvmf_create_transport", 00:05:48.720 "nvmf_get_targets", 00:05:48.720 "nvmf_delete_target", 00:05:48.720 "nvmf_create_target", 00:05:48.720 "nvmf_subsystem_allow_any_host", 00:05:48.720 "nvmf_subsystem_set_keys", 00:05:48.720 "nvmf_subsystem_remove_host", 00:05:48.720 "nvmf_subsystem_add_host", 00:05:48.720 "nvmf_ns_remove_host", 00:05:48.720 "nvmf_ns_add_host", 00:05:48.720 "nvmf_subsystem_remove_ns", 00:05:48.720 "nvmf_subsystem_set_ns_ana_group", 00:05:48.720 "nvmf_subsystem_add_ns", 00:05:48.720 "nvmf_subsystem_listener_set_ana_state", 00:05:48.720 "nvmf_discovery_get_referrals", 00:05:48.720 "nvmf_discovery_remove_referral", 00:05:48.720 "nvmf_discovery_add_referral", 00:05:48.720 "nvmf_subsystem_remove_listener", 00:05:48.720 "nvmf_subsystem_add_listener", 00:05:48.720 "nvmf_delete_subsystem", 00:05:48.720 "nvmf_create_subsystem", 00:05:48.720 "nvmf_get_subsystems", 00:05:48.720 "env_dpdk_get_mem_stats", 00:05:48.720 "nbd_get_disks", 00:05:48.720 
"nbd_stop_disk", 00:05:48.720 "nbd_start_disk", 00:05:48.720 "ublk_recover_disk", 00:05:48.720 "ublk_get_disks", 00:05:48.720 "ublk_stop_disk", 00:05:48.720 "ublk_start_disk", 00:05:48.720 "ublk_destroy_target", 00:05:48.720 "ublk_create_target", 00:05:48.720 "virtio_blk_create_transport", 00:05:48.720 "virtio_blk_get_transports", 00:05:48.720 "vhost_controller_set_coalescing", 00:05:48.720 "vhost_get_controllers", 00:05:48.720 "vhost_delete_controller", 00:05:48.720 "vhost_create_blk_controller", 00:05:48.720 "vhost_scsi_controller_remove_target", 00:05:48.720 "vhost_scsi_controller_add_target", 00:05:48.720 "vhost_start_scsi_controller", 00:05:48.720 "vhost_create_scsi_controller", 00:05:48.720 "thread_set_cpumask", 00:05:48.720 "scheduler_set_options", 00:05:48.720 "framework_get_governor", 00:05:48.720 "framework_get_scheduler", 00:05:48.720 "framework_set_scheduler", 00:05:48.720 "framework_get_reactors", 00:05:48.720 "thread_get_io_channels", 00:05:48.720 "thread_get_pollers", 00:05:48.720 "thread_get_stats", 00:05:48.720 "framework_monitor_context_switch", 00:05:48.720 "spdk_kill_instance", 00:05:48.720 "log_enable_timestamps", 00:05:48.720 "log_get_flags", 00:05:48.720 "log_clear_flag", 00:05:48.720 "log_set_flag", 00:05:48.720 "log_get_level", 00:05:48.720 "log_set_level", 00:05:48.720 "log_get_print_level", 00:05:48.720 "log_set_print_level", 00:05:48.720 "framework_enable_cpumask_locks", 00:05:48.720 "framework_disable_cpumask_locks", 00:05:48.720 "framework_wait_init", 00:05:48.720 "framework_start_init", 00:05:48.720 "scsi_get_devices", 00:05:48.720 "bdev_get_histogram", 00:05:48.720 "bdev_enable_histogram", 00:05:48.720 "bdev_set_qos_limit", 00:05:48.720 "bdev_set_qd_sampling_period", 00:05:48.720 "bdev_get_bdevs", 00:05:48.720 "bdev_reset_iostat", 00:05:48.720 "bdev_get_iostat", 00:05:48.720 "bdev_examine", 00:05:48.720 "bdev_wait_for_examine", 00:05:48.720 "bdev_set_options", 00:05:48.720 "accel_get_stats", 00:05:48.720 "accel_set_options", 
00:05:48.720 "accel_set_driver", 00:05:48.720 "accel_crypto_key_destroy", 00:05:48.720 "accel_crypto_keys_get", 00:05:48.720 "accel_crypto_key_create", 00:05:48.720 "accel_assign_opc", 00:05:48.720 "accel_get_module_info", 00:05:48.720 "accel_get_opc_assignments", 00:05:48.720 "vmd_rescan", 00:05:48.720 "vmd_remove_device", 00:05:48.720 "vmd_enable", 00:05:48.720 "sock_get_default_impl", 00:05:48.720 "sock_set_default_impl", 00:05:48.720 "sock_impl_set_options", 00:05:48.720 "sock_impl_get_options", 00:05:48.720 "iobuf_get_stats", 00:05:48.720 "iobuf_set_options", 00:05:48.720 "keyring_get_keys", 00:05:48.720 "framework_get_pci_devices", 00:05:48.720 "framework_get_config", 00:05:48.720 "framework_get_subsystems", 00:05:48.720 "fsdev_set_opts", 00:05:48.720 "fsdev_get_opts", 00:05:48.720 "trace_get_info", 00:05:48.720 "trace_get_tpoint_group_mask", 00:05:48.720 "trace_disable_tpoint_group", 00:05:48.720 "trace_enable_tpoint_group", 00:05:48.720 "trace_clear_tpoint_mask", 00:05:48.720 "trace_set_tpoint_mask", 00:05:48.720 "notify_get_notifications", 00:05:48.720 "notify_get_types", 00:05:48.720 "spdk_get_version", 00:05:48.720 "rpc_get_methods" 00:05:48.720 ] 00:05:48.720 12:49:06 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:48.720 12:49:06 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:48.720 12:49:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:48.980 12:49:06 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:48.980 12:49:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70056 00:05:48.980 12:49:06 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 70056 ']' 00:05:48.980 12:49:06 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 70056 00:05:48.980 12:49:06 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:48.980 12:49:06 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:48.980 12:49:06 spdkcli_tcp -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70056 00:05:48.980 killing process with pid 70056 00:05:48.980 12:49:06 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:48.980 12:49:06 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:48.980 12:49:06 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70056' 00:05:48.980 12:49:06 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 70056 00:05:48.980 12:49:06 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 70056 00:05:49.577 00:05:49.577 real 0m2.091s 00:05:49.577 user 0m3.299s 00:05:49.577 sys 0m0.722s 00:05:49.577 12:49:07 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.577 12:49:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:49.577 ************************************ 00:05:49.577 END TEST spdkcli_tcp 00:05:49.577 ************************************ 00:05:49.577 12:49:07 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:49.577 12:49:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.577 12:49:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.577 12:49:07 -- common/autotest_common.sh@10 -- # set +x 00:05:49.577 ************************************ 00:05:49.577 START TEST dpdk_mem_utility 00:05:49.577 ************************************ 00:05:49.577 12:49:07 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:49.838 * Looking for test storage... 
00:05:49.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:49.838 12:49:07 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:49.838 12:49:07 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:49.838 12:49:07 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:49.838 12:49:07 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:49.838 12:49:07 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.838 12:49:07 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.838 12:49:07 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.838 12:49:07 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.838 12:49:07 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.838 12:49:07 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.838 12:49:07 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.838 12:49:07 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.838 12:49:07 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.838 12:49:07 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.838 12:49:07 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.838 12:49:07 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:49.838 12:49:07 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:49.838 12:49:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.838 12:49:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.838 12:49:07 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:49.838 12:49:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:49.838 12:49:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.838 12:49:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:49.838 12:49:07 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.838 12:49:07 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:49.838 12:49:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:49.838 12:49:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.838 12:49:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:49.838 12:49:07 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.838 12:49:07 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.838 12:49:07 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.838 12:49:07 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:49.838 12:49:07 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.838 12:49:07 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:49.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.838 --rc genhtml_branch_coverage=1 00:05:49.838 --rc genhtml_function_coverage=1 00:05:49.838 --rc genhtml_legend=1 00:05:49.838 --rc geninfo_all_blocks=1 00:05:49.838 --rc geninfo_unexecuted_blocks=1 00:05:49.838 00:05:49.838 ' 00:05:49.838 12:49:07 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:49.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.838 --rc genhtml_branch_coverage=1 00:05:49.838 --rc genhtml_function_coverage=1 00:05:49.838 --rc genhtml_legend=1 00:05:49.838 --rc geninfo_all_blocks=1 00:05:49.838 --rc 
geninfo_unexecuted_blocks=1 00:05:49.838 00:05:49.838 ' 00:05:49.838 12:49:07 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:49.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.838 --rc genhtml_branch_coverage=1 00:05:49.838 --rc genhtml_function_coverage=1 00:05:49.838 --rc genhtml_legend=1 00:05:49.838 --rc geninfo_all_blocks=1 00:05:49.838 --rc geninfo_unexecuted_blocks=1 00:05:49.838 00:05:49.838 ' 00:05:49.838 12:49:07 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:49.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.838 --rc genhtml_branch_coverage=1 00:05:49.838 --rc genhtml_function_coverage=1 00:05:49.838 --rc genhtml_legend=1 00:05:49.838 --rc geninfo_all_blocks=1 00:05:49.838 --rc geninfo_unexecuted_blocks=1 00:05:49.838 00:05:49.838 ' 00:05:49.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.838 12:49:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:49.838 12:49:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70156 00:05:49.838 12:49:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.838 12:49:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70156 00:05:49.838 12:49:07 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 70156 ']' 00:05:49.838 12:49:07 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.838 12:49:07 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.838 12:49:07 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:49.838 12:49:07 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.838 12:49:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:49.838 [2024-11-26 12:49:07.480551] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:49.838 [2024-11-26 12:49:07.480768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70156 ] 00:05:50.099 [2024-11-26 12:49:07.641446] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.099 [2024-11-26 12:49:07.712736] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.671 12:49:08 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.671 12:49:08 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:50.671 12:49:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:50.671 12:49:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:50.671 12:49:08 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.671 12:49:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:50.671 { 00:05:50.671 "filename": "/tmp/spdk_mem_dump.txt" 00:05:50.671 } 00:05:50.671 12:49:08 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.671 12:49:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:50.934 DPDK memory size 860.000000 MiB in 1 heap(s) 00:05:50.934 1 heaps totaling size 860.000000 MiB 00:05:50.934 size: 860.000000 MiB heap id: 0 00:05:50.934 end heaps---------- 00:05:50.934 9 mempools totaling size 642.649841 MiB 00:05:50.934 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:50.934 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:50.934 size: 92.545471 MiB name: bdev_io_70156 00:05:50.934 size: 51.011292 MiB name: evtpool_70156 00:05:50.934 size: 50.003479 MiB name: msgpool_70156 00:05:50.934 size: 36.509338 MiB name: fsdev_io_70156 00:05:50.934 size: 21.763794 MiB name: PDU_Pool 00:05:50.934 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:50.934 size: 0.026123 MiB name: Session_Pool 00:05:50.934 end mempools------- 00:05:50.934 6 memzones totaling size 4.142822 MiB 00:05:50.934 size: 1.000366 MiB name: RG_ring_0_70156 00:05:50.934 size: 1.000366 MiB name: RG_ring_1_70156 00:05:50.934 size: 1.000366 MiB name: RG_ring_4_70156 00:05:50.934 size: 1.000366 MiB name: RG_ring_5_70156 00:05:50.934 size: 0.125366 MiB name: RG_ring_2_70156 00:05:50.934 size: 0.015991 MiB name: RG_ring_3_70156 00:05:50.934 end memzones------- 00:05:50.934 12:49:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:50.934 heap id: 0 total size: 860.000000 MiB number of busy elements: 307 number of free elements: 16 00:05:50.934 list of free elements. 
size: 13.936523 MiB 00:05:50.934 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:50.934 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:50.934 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:05:50.934 element at address: 0x20001be00000 with size: 0.999878 MiB 00:05:50.934 element at address: 0x200034a00000 with size: 0.994446 MiB 00:05:50.934 element at address: 0x200009600000 with size: 0.959839 MiB 00:05:50.934 element at address: 0x200015e00000 with size: 0.954285 MiB 00:05:50.934 element at address: 0x20001c000000 with size: 0.936584 MiB 00:05:50.934 element at address: 0x200000200000 with size: 0.834839 MiB 00:05:50.934 element at address: 0x20001d800000 with size: 0.568237 MiB 00:05:50.934 element at address: 0x20000d800000 with size: 0.489258 MiB 00:05:50.934 element at address: 0x200003e00000 with size: 0.487915 MiB 00:05:50.934 element at address: 0x20001c200000 with size: 0.485657 MiB 00:05:50.934 element at address: 0x200007000000 with size: 0.480469 MiB 00:05:50.934 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:05:50.935 element at address: 0x200003a00000 with size: 0.353027 MiB 00:05:50.935 list of standard malloc elements. 
size: 199.266785 MiB 00:05:50.935 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:05:50.935 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:05:50.935 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:05:50.935 element at address: 0x20001befff80 with size: 1.000122 MiB 00:05:50.935 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:05:50.935 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:50.935 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:05:50.935 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:50.935 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:05:50.935 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:50.935 element at 
address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003a5a600 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003a5a800 with size: 0.000183 MiB 
00:05:50.935 element at address: 0x200003a5eac0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003aff880 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7ce80 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7cf40 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7d000 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7d6c0 with 
size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:05:50.935 element at address: 
0x200003e7ebc0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:05:50.935 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20000707b000 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20000707b180 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20000707b240 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20000707b300 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20000707b480 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20000707b540 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20000707b600 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:05:50.935 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:05:50.935 
element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20001d891780 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20001d891840 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20001d891900 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:05:50.935 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d892080 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d892140 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d892200 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d892380 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d892440 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d892500 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d892680 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d892740 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d892800 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d8928c0 with size: 0.000183 
MiB 00:05:50.936 element at address: 0x20001d892980 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d893040 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d893100 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d893280 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d893340 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d893400 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d893580 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d893640 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d893700 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d893880 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d893940 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d893dc0 
with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d894000 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d894180 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d894240 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d894300 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d894480 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d894540 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d894600 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d894780 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d894840 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d894900 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d895080 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d895140 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d895200 with size: 0.000183 MiB 00:05:50.936 element at 
address: 0x20001d8952c0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d895380 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20001d895440 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6d380 with size: 0.000183 MiB 
00:05:50.936 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6e880 with 
size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:05:50.936 element at address: 
0x20002ac6fd80 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:05:50.936 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:05:50.936 list of memzone associated elements. size: 646.796692 MiB 00:05:50.936 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:50.937 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:50.937 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:50.937 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:50.937 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:50.937 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70156_0 00:05:50.937 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:50.937 associated memzone info: size: 48.002930 MiB name: MP_evtpool_70156_0 00:05:50.937 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:50.937 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70156_0 00:05:50.937 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:05:50.937 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70156_0 00:05:50.937 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:50.937 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:50.937 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:05:50.937 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:50.937 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:50.937 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_70156 00:05:50.937 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:50.937 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70156 00:05:50.937 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:50.937 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70156 00:05:50.937 element 
at address: 0x20000d8fde40 with size: 1.008118 MiB 00:05:50.937 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:50.937 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:50.937 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:50.937 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:05:50.937 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:50.937 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:05:50.937 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:50.937 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:50.937 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70156 00:05:50.937 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:50.937 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70156 00:05:50.937 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:50.937 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70156 00:05:50.937 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:50.937 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70156 00:05:50.937 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:05:50.937 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70156 00:05:50.937 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:05:50.937 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70156 00:05:50.937 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:05:50.937 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:50.937 element at address: 0x20000707b780 with size: 0.500488 MiB 00:05:50.937 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:50.937 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:50.937 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:50.937 
element at address: 0x200003a5eb80 with size: 0.125488 MiB 00:05:50.937 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70156 00:05:50.937 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:05:50.937 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:50.937 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:05:50.937 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:50.937 element at address: 0x200003a5a8c0 with size: 0.016113 MiB 00:05:50.937 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70156 00:05:50.937 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:05:50.937 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:50.937 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:50.937 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70156 00:05:50.937 element at address: 0x200003aff940 with size: 0.000305 MiB 00:05:50.937 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70156 00:05:50.937 element at address: 0x200003a5a6c0 with size: 0.000305 MiB 00:05:50.937 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70156 00:05:50.937 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:05:50.937 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:50.937 12:49:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:50.937 12:49:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70156 00:05:50.937 12:49:08 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 70156 ']' 00:05:50.937 12:49:08 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 70156 00:05:50.937 12:49:08 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:50.937 12:49:08 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:50.937 12:49:08 dpdk_mem_utility 
-- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70156 00:05:50.937 12:49:08 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:50.937 12:49:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:50.937 12:49:08 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70156' 00:05:50.937 killing process with pid 70156 00:05:50.937 12:49:08 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 70156 00:05:50.937 12:49:08 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 70156 00:05:51.507 00:05:51.507 real 0m1.942s 00:05:51.507 user 0m1.694s 00:05:51.507 sys 0m0.686s 00:05:51.507 12:49:09 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.507 12:49:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:51.507 ************************************ 00:05:51.507 END TEST dpdk_mem_utility 00:05:51.507 ************************************ 00:05:51.507 12:49:09 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:51.507 12:49:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.507 12:49:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.507 12:49:09 -- common/autotest_common.sh@10 -- # set +x 00:05:51.507 ************************************ 00:05:51.507 START TEST event 00:05:51.507 ************************************ 00:05:51.507 12:49:09 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:51.766 * Looking for test storage... 
00:05:51.766 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:51.766 12:49:09 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:51.766 12:49:09 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:51.766 12:49:09 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:51.766 12:49:09 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:51.766 12:49:09 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.766 12:49:09 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.766 12:49:09 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.766 12:49:09 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.766 12:49:09 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.766 12:49:09 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.766 12:49:09 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.766 12:49:09 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.766 12:49:09 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.766 12:49:09 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.766 12:49:09 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.766 12:49:09 event -- scripts/common.sh@344 -- # case "$op" in 00:05:51.766 12:49:09 event -- scripts/common.sh@345 -- # : 1 00:05:51.766 12:49:09 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.766 12:49:09 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.766 12:49:09 event -- scripts/common.sh@365 -- # decimal 1 00:05:51.766 12:49:09 event -- scripts/common.sh@353 -- # local d=1 00:05:51.766 12:49:09 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.766 12:49:09 event -- scripts/common.sh@355 -- # echo 1 00:05:51.766 12:49:09 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.766 12:49:09 event -- scripts/common.sh@366 -- # decimal 2 00:05:51.766 12:49:09 event -- scripts/common.sh@353 -- # local d=2 00:05:51.766 12:49:09 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.766 12:49:09 event -- scripts/common.sh@355 -- # echo 2 00:05:51.766 12:49:09 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.766 12:49:09 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.766 12:49:09 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.766 12:49:09 event -- scripts/common.sh@368 -- # return 0 00:05:51.766 12:49:09 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.766 12:49:09 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:51.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.766 --rc genhtml_branch_coverage=1 00:05:51.766 --rc genhtml_function_coverage=1 00:05:51.766 --rc genhtml_legend=1 00:05:51.766 --rc geninfo_all_blocks=1 00:05:51.766 --rc geninfo_unexecuted_blocks=1 00:05:51.766 00:05:51.766 ' 00:05:51.766 12:49:09 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:51.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.766 --rc genhtml_branch_coverage=1 00:05:51.766 --rc genhtml_function_coverage=1 00:05:51.766 --rc genhtml_legend=1 00:05:51.766 --rc geninfo_all_blocks=1 00:05:51.766 --rc geninfo_unexecuted_blocks=1 00:05:51.766 00:05:51.766 ' 00:05:51.766 12:49:09 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:51.766 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:51.766 --rc genhtml_branch_coverage=1 00:05:51.766 --rc genhtml_function_coverage=1 00:05:51.766 --rc genhtml_legend=1 00:05:51.766 --rc geninfo_all_blocks=1 00:05:51.766 --rc geninfo_unexecuted_blocks=1 00:05:51.766 00:05:51.766 ' 00:05:51.766 12:49:09 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:51.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.766 --rc genhtml_branch_coverage=1 00:05:51.766 --rc genhtml_function_coverage=1 00:05:51.766 --rc genhtml_legend=1 00:05:51.766 --rc geninfo_all_blocks=1 00:05:51.766 --rc geninfo_unexecuted_blocks=1 00:05:51.766 00:05:51.766 ' 00:05:51.766 12:49:09 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:51.766 12:49:09 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:51.766 12:49:09 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:51.766 12:49:09 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:51.766 12:49:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.766 12:49:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.766 ************************************ 00:05:51.766 START TEST event_perf 00:05:51.766 ************************************ 00:05:51.766 12:49:09 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:52.025 Running I/O for 1 seconds...[2024-11-26 12:49:09.461771] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:52.025 [2024-11-26 12:49:09.461958] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70242 ] 00:05:52.025 [2024-11-26 12:49:09.622895] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:52.025 [2024-11-26 12:49:09.696455] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.025 [2024-11-26 12:49:09.696754] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.025 [2024-11-26 12:49:09.696644] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.025 [2024-11-26 12:49:09.696931] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:53.408 Running I/O for 1 seconds... 00:05:53.408 lcore 0: 81681 00:05:53.408 lcore 1: 81683 00:05:53.408 lcore 2: 81687 00:05:53.408 lcore 3: 81687 00:05:53.408 done. 
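The event_perf run above prints one event counter per lcore after its 1-second window. As an illustrative sketch (not part of the SPDK test suite; the `sum_lcore_events` helper and the exact log format are assumptions based on the output shown), the per-lcore lines can be totaled with a small awk filter:

```shell
# Hypothetical helper: sum the per-lcore event counts that event_perf
# prints after its 1-second run. Assumes lines of the form "lcore N: COUNT".
sum_lcore_events() {
    awk '/^lcore [0-9]+:/ {total += $3} END {print total+0}'
}

# Feed it the four lcore lines from the run above; prints the total
# number of events processed across all cores in the 1-second window.
printf 'lcore 0: 81681\nlcore 1: 81683\nlcore 2: 81687\nlcore 3: 81687\n' | sum_lcore_events
```

With the counts from this run the total comes to 326738 events across the 0xF core mask.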
00:05:53.408 00:05:53.408 real 0m1.419s 00:05:53.408 user 0m4.163s 00:05:53.408 sys 0m0.133s 00:05:53.408 12:49:10 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.408 12:49:10 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:53.408 ************************************ 00:05:53.408 END TEST event_perf 00:05:53.408 ************************************ 00:05:53.408 12:49:10 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:53.408 12:49:10 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:53.408 12:49:10 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.408 12:49:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.408 ************************************ 00:05:53.408 START TEST event_reactor 00:05:53.408 ************************************ 00:05:53.408 12:49:10 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:53.408 [2024-11-26 12:49:10.956330] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:53.408 [2024-11-26 12:49:10.956895] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70277 ] 00:05:53.668 [2024-11-26 12:49:11.118086] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.668 [2024-11-26 12:49:11.186755] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.048 test_start 00:05:55.048 oneshot 00:05:55.048 tick 100 00:05:55.048 tick 100 00:05:55.048 tick 250 00:05:55.048 tick 100 00:05:55.048 tick 100 00:05:55.048 tick 100 00:05:55.048 tick 250 00:05:55.048 tick 500 00:05:55.048 tick 100 00:05:55.048 tick 100 00:05:55.048 tick 250 00:05:55.048 tick 100 00:05:55.048 tick 100 00:05:55.048 test_end 00:05:55.048 00:05:55.048 real 0m1.412s 00:05:55.049 user 0m1.177s 00:05:55.049 sys 0m0.127s 00:05:55.049 12:49:12 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.049 12:49:12 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:55.049 ************************************ 00:05:55.049 END TEST event_reactor 00:05:55.049 ************************************ 00:05:55.049 12:49:12 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:55.049 12:49:12 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:55.049 12:49:12 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.049 12:49:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:55.049 ************************************ 00:05:55.049 START TEST event_reactor_perf 00:05:55.049 ************************************ 00:05:55.049 12:49:12 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:55.049 [2024-11-26 
12:49:12.432137] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:55.049 [2024-11-26 12:49:12.432277] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70318 ] 00:05:55.049 [2024-11-26 12:49:12.593027] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.049 [2024-11-26 12:49:12.659062] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.429 test_start 00:05:56.429 test_end 00:05:56.429 Performance: 416463 events per second 00:05:56.429 00:05:56.429 real 0m1.401s 00:05:56.429 user 0m1.183s 00:05:56.429 sys 0m0.111s 00:05:56.429 12:49:13 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.429 12:49:13 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:56.429 ************************************ 00:05:56.429 END TEST event_reactor_perf 00:05:56.429 ************************************ 00:05:56.429 12:49:13 event -- event/event.sh@49 -- # uname -s 00:05:56.429 12:49:13 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:56.429 12:49:13 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:56.429 12:49:13 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.429 12:49:13 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.429 12:49:13 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.429 ************************************ 00:05:56.429 START TEST event_scheduler 00:05:56.429 ************************************ 00:05:56.429 12:49:13 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:56.429 * Looking for test storage... 
00:05:56.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:56.429 12:49:13 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:56.429 12:49:13 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:56.429 12:49:13 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:56.429 12:49:14 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:56.429 12:49:14 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.429 12:49:14 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.429 12:49:14 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.429 12:49:14 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.429 12:49:14 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.429 12:49:14 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.429 12:49:14 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.429 12:49:14 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.429 12:49:14 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.429 12:49:14 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.429 12:49:14 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.429 12:49:14 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:56.429 12:49:14 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:56.429 12:49:14 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.429 12:49:14 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.429 12:49:14 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:56.429 12:49:14 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:56.429 12:49:14 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.429 12:49:14 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:56.429 12:49:14 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.429 12:49:14 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:56.429 12:49:14 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:56.429 12:49:14 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.429 12:49:14 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:56.429 12:49:14 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.429 12:49:14 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.429 12:49:14 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.429 12:49:14 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:56.429 12:49:14 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.429 12:49:14 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:56.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.429 --rc genhtml_branch_coverage=1 00:05:56.429 --rc genhtml_function_coverage=1 00:05:56.429 --rc genhtml_legend=1 00:05:56.429 --rc geninfo_all_blocks=1 00:05:56.429 --rc geninfo_unexecuted_blocks=1 00:05:56.429 00:05:56.429 ' 00:05:56.429 12:49:14 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:56.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.429 --rc genhtml_branch_coverage=1 00:05:56.429 --rc genhtml_function_coverage=1 00:05:56.429 --rc 
genhtml_legend=1 00:05:56.429 --rc geninfo_all_blocks=1 00:05:56.429 --rc geninfo_unexecuted_blocks=1 00:05:56.429 00:05:56.429 ' 00:05:56.429 12:49:14 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:56.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.429 --rc genhtml_branch_coverage=1 00:05:56.429 --rc genhtml_function_coverage=1 00:05:56.429 --rc genhtml_legend=1 00:05:56.429 --rc geninfo_all_blocks=1 00:05:56.429 --rc geninfo_unexecuted_blocks=1 00:05:56.429 00:05:56.429 ' 00:05:56.429 12:49:14 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:56.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.429 --rc genhtml_branch_coverage=1 00:05:56.429 --rc genhtml_function_coverage=1 00:05:56.429 --rc genhtml_legend=1 00:05:56.429 --rc geninfo_all_blocks=1 00:05:56.429 --rc geninfo_unexecuted_blocks=1 00:05:56.429 00:05:56.429 ' 00:05:56.429 12:49:14 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:56.429 12:49:14 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70389 00:05:56.429 12:49:14 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:56.429 12:49:14 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.429 12:49:14 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70389 00:05:56.429 12:49:14 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70389 ']' 00:05:56.429 12:49:14 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.430 12:49:14 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.430 12:49:14 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:56.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.430 12:49:14 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.430 12:49:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:56.690 [2024-11-26 12:49:14.175553] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:56.690 [2024-11-26 12:49:14.175746] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70389 ] 00:05:56.690 [2024-11-26 12:49:14.335178] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:56.950 [2024-11-26 12:49:14.383535] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.950 [2024-11-26 12:49:14.383804] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.950 [2024-11-26 12:49:14.383928] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.950 [2024-11-26 12:49:14.383736] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.520 12:49:14 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.520 12:49:14 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:57.520 12:49:14 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:57.520 12:49:14 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.520 12:49:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.520 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:57.520 POWER: Cannot set governor of lcore 0 to userspace 00:05:57.520 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:57.520 POWER: Cannot set governor of lcore 0 to performance 00:05:57.520 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:57.520 POWER: Cannot set governor of lcore 0 to userspace 00:05:57.520 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:57.520 POWER: Cannot set governor of lcore 0 to userspace 00:05:57.520 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:57.520 POWER: Unable to set Power Management Environment for lcore 0 00:05:57.520 [2024-11-26 12:49:15.004784] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:57.520 [2024-11-26 12:49:15.004807] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:57.520 [2024-11-26 12:49:15.004835] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:57.520 [2024-11-26 12:49:15.004868] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:57.520 [2024-11-26 12:49:15.004876] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:57.520 [2024-11-26 12:49:15.004886] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:57.520 12:49:15 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.520 12:49:15 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:57.520 12:49:15 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.520 12:49:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.520 [2024-11-26 12:49:15.081524] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
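The POWER errors above occur because the dynamic scheduler's DPDK governor needs writable per-CPU `scaling_governor` files under sysfs, which this VM does not expose, so it logs the failures and falls back. A minimal sketch of that precondition check (an illustrative helper, not something the SPDK scripts actually run; the `check_governors` name and the fake-directory demo are assumptions):

```shell
# Count CPUs whose cpufreq scaling_governor file is writable vs. missing.
# Takes an optional sysfs base dir so the layout can be faked for a demo.
check_governors() {
    local base=${1:-/sys/devices/system/cpu} cpu ok=0 missing=0
    for cpu in "$base"/cpu[0-9]*; do
        [ -d "$cpu" ] || continue
        if [ -w "$cpu/cpufreq/scaling_governor" ]; then
            ok=$((ok + 1))
        else
            missing=$((missing + 1))
        fi
    done
    echo "writable=$ok missing=$missing"
}

# Demo against a fake sysfs layout: cpu0 has a governor file, cpu1 does not,
# mirroring the "Cannot set governor of lcore 0" situation logged above.
tmp=$(mktemp -d)
mkdir -p "$tmp/cpu0/cpufreq" "$tmp/cpu1"
echo performance > "$tmp/cpu0/cpufreq/scaling_governor"
check_governors "$tmp"
rm -rf "$tmp"
```

On a host like the one in this log, every CPU would land in the `missing` bucket, which is exactly when the scheduler reports "Unable to initialize dpdk governor" and continues without frequency scaling.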
00:05:57.520 12:49:15 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.520 12:49:15 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:57.520 12:49:15 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.520 12:49:15 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.520 12:49:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.520 ************************************ 00:05:57.520 START TEST scheduler_create_thread 00:05:57.520 ************************************ 00:05:57.520 12:49:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:57.520 12:49:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:57.520 12:49:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.520 12:49:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.520 2 00:05:57.520 12:49:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.520 12:49:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:57.520 12:49:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.520 12:49:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.520 3 00:05:57.520 12:49:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.520 12:49:15 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:57.520 12:49:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.520 12:49:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.520 4 00:05:57.520 12:49:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.520 12:49:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:57.520 12:49:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.520 12:49:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.520 5 00:05:57.520 12:49:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.520 12:49:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:57.520 12:49:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.520 12:49:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.520 6 00:05:57.520 12:49:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.520 12:49:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:57.520 12:49:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.520 12:49:15 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:57.520 7 00:05:57.521 12:49:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.521 12:49:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:57.521 12:49:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.521 12:49:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.521 8 00:05:57.521 12:49:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.521 12:49:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:57.521 12:49:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.521 12:49:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.780 9 00:05:57.780 12:49:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.780 12:49:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:57.780 12:49:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.780 12:49:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.161 10 00:05:59.161 12:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.161 12:49:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:59.161 12:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.161 12:49:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.732 12:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.732 12:49:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:59.732 12:49:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:59.732 12:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.732 12:49:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.693 12:49:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.693 12:49:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:00.693 12:49:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.693 12:49:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.263 12:49:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.263 12:49:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:01.263 12:49:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:01.263 12:49:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.263 12:49:18 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.831 ************************************ 00:06:01.831 END TEST scheduler_create_thread 00:06:01.831 ************************************ 00:06:01.831 12:49:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.831 00:06:01.831 real 0m4.202s 00:06:01.831 user 0m0.025s 00:06:01.831 sys 0m0.011s 00:06:01.831 12:49:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.831 12:49:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.831 12:49:19 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:01.831 12:49:19 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70389 00:06:01.831 12:49:19 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70389 ']' 00:06:01.831 12:49:19 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70389 00:06:01.831 12:49:19 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:01.831 12:49:19 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:01.831 12:49:19 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70389 00:06:01.831 killing process with pid 70389 00:06:01.831 12:49:19 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:01.831 12:49:19 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:01.831 12:49:19 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70389' 00:06:01.831 12:49:19 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70389 00:06:01.831 12:49:19 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70389 00:06:02.091 [2024-11-26 12:49:19.576552] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:02.352 ************************************ 00:06:02.352 END TEST event_scheduler 00:06:02.352 ************************************ 00:06:02.352 00:06:02.352 real 0m6.012s 00:06:02.352 user 0m13.360s 00:06:02.352 sys 0m0.510s 00:06:02.352 12:49:19 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.352 12:49:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:02.352 12:49:19 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:02.352 12:49:19 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:02.352 12:49:19 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.352 12:49:19 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.352 12:49:19 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.352 ************************************ 00:06:02.352 START TEST app_repeat 00:06:02.352 ************************************ 00:06:02.352 12:49:19 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:02.352 12:49:19 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.352 12:49:19 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.352 12:49:19 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:02.352 12:49:19 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.352 12:49:19 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:02.352 12:49:19 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:02.352 12:49:19 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:02.352 12:49:19 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70500 00:06:02.352 12:49:19 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:02.352 
12:49:19 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.352 12:49:19 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70500' 00:06:02.352 Process app_repeat pid: 70500 00:06:02.352 12:49:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:02.352 12:49:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:02.352 spdk_app_start Round 0 00:06:02.352 12:49:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70500 /var/tmp/spdk-nbd.sock 00:06:02.352 12:49:19 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70500 ']' 00:06:02.352 12:49:19 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:02.352 12:49:19 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.352 12:49:19 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:02.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:02.352 12:49:19 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.352 12:49:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:02.352 [2024-11-26 12:49:20.019772] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:02.352 [2024-11-26 12:49:20.019957] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70500 ] 00:06:02.612 [2024-11-26 12:49:20.180543] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.612 [2024-11-26 12:49:20.251003] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.612 [2024-11-26 12:49:20.251109] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.552 12:49:20 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.552 12:49:20 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:03.552 12:49:20 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.552 Malloc0 00:06:03.552 12:49:21 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.812 Malloc1 00:06:03.812 12:49:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.812 12:49:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.812 12:49:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.812 12:49:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:03.812 12:49:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.812 12:49:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:03.812 12:49:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.812 12:49:21 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.812 12:49:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.812 12:49:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:03.812 12:49:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.812 12:49:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:03.812 12:49:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:03.812 12:49:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:03.812 12:49:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.812 12:49:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:04.071 /dev/nbd0 00:06:04.071 12:49:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:04.071 12:49:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:04.071 12:49:21 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:04.071 12:49:21 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:04.071 12:49:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:04.071 12:49:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:04.071 12:49:21 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:04.071 12:49:21 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:04.071 12:49:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:04.071 12:49:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:04.071 12:49:21 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.071 1+0 records in 00:06:04.071 1+0 
records out 00:06:04.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229324 s, 17.9 MB/s 00:06:04.072 12:49:21 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.072 12:49:21 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:04.072 12:49:21 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.072 12:49:21 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:04.072 12:49:21 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:04.072 12:49:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.072 12:49:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.072 12:49:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:04.331 /dev/nbd1 00:06:04.331 12:49:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:04.331 12:49:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:04.331 12:49:21 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:04.331 12:49:21 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:04.331 12:49:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:04.331 12:49:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:04.331 12:49:21 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:04.331 12:49:21 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:04.331 12:49:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:04.331 12:49:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:04.331 12:49:21 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.331 1+0 records in 00:06:04.331 1+0 records out 00:06:04.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318177 s, 12.9 MB/s 00:06:04.331 12:49:21 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.331 12:49:21 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:04.331 12:49:21 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.331 12:49:21 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:04.331 12:49:21 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:04.331 12:49:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.331 12:49:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.331 12:49:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.331 12:49:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.331 12:49:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.592 12:49:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:04.592 { 00:06:04.592 "nbd_device": "/dev/nbd0", 00:06:04.592 "bdev_name": "Malloc0" 00:06:04.592 }, 00:06:04.592 { 00:06:04.592 "nbd_device": "/dev/nbd1", 00:06:04.592 "bdev_name": "Malloc1" 00:06:04.592 } 00:06:04.592 ]' 00:06:04.592 12:49:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:04.592 { 00:06:04.592 "nbd_device": "/dev/nbd0", 00:06:04.592 "bdev_name": "Malloc0" 00:06:04.592 }, 00:06:04.592 { 00:06:04.592 "nbd_device": "/dev/nbd1", 00:06:04.592 "bdev_name": "Malloc1" 00:06:04.592 } 00:06:04.592 ]' 00:06:04.592 12:49:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:06:04.592 12:49:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:04.592 /dev/nbd1' 00:06:04.592 12:49:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:04.592 /dev/nbd1' 00:06:04.592 12:49:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:04.593 256+0 records in 00:06:04.593 256+0 records out 00:06:04.593 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00578711 s, 181 MB/s 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:04.593 256+0 records in 00:06:04.593 256+0 records out 00:06:04.593 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216307 s, 48.5 MB/s 00:06:04.593 12:49:22 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:04.593 256+0 records in 00:06:04.593 256+0 records out 00:06:04.593 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247234 s, 42.4 MB/s 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.593 12:49:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:04.853 12:49:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:04.853 12:49:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:04.853 12:49:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:04.853 12:49:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.853 12:49:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.853 12:49:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:04.853 12:49:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:04.853 12:49:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.853 12:49:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.853 12:49:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:05.114 12:49:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:05.114 12:49:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:05.114 12:49:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:05.114 12:49:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.114 12:49:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.114 12:49:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:05.114 12:49:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:05.114 12:49:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.114 12:49:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.114 12:49:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.114 12:49:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.373 12:49:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:05.373 12:49:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:05.373 12:49:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.373 12:49:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:05.373 12:49:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:05.373 12:49:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.373 12:49:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:05.373 12:49:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:05.373 12:49:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:05.373 12:49:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:05.373 12:49:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:05.373 12:49:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:05.373 12:49:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:05.632 12:49:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:05.891 [2024-11-26 12:49:23.425879] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.891 [2024-11-26 12:49:23.498552] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.891 [2024-11-26 12:49:23.498557] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.151 
[2024-11-26 12:49:23.575749] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:06.151 [2024-11-26 12:49:23.575813] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:08.687 12:49:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:08.687 spdk_app_start Round 1 00:06:08.687 12:49:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:08.687 12:49:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70500 /var/tmp/spdk-nbd.sock 00:06:08.687 12:49:26 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70500 ']' 00:06:08.687 12:49:26 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:08.687 12:49:26 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:08.687 12:49:26 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:08.687 12:49:26 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.687 12:49:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:08.687 12:49:26 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.687 12:49:26 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:08.687 12:49:26 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.946 Malloc0 00:06:08.946 12:49:26 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.205 Malloc1 00:06:09.205 12:49:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.205 12:49:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.205 12:49:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.205 12:49:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:09.205 12:49:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.205 12:49:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:09.205 12:49:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.206 12:49:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.206 12:49:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.206 12:49:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:09.206 12:49:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.206 12:49:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:09.206 12:49:26 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:09.206 12:49:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:09.206 12:49:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.206 12:49:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:09.465 /dev/nbd0 00:06:09.465 12:49:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:09.465 12:49:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:09.465 12:49:26 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:09.465 12:49:26 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:09.465 12:49:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:09.465 12:49:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:09.465 12:49:26 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:09.465 12:49:26 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:09.465 12:49:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:09.465 12:49:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:09.465 12:49:26 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.465 1+0 records in 00:06:09.465 1+0 records out 00:06:09.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353332 s, 11.6 MB/s 00:06:09.465 12:49:26 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.465 12:49:26 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:09.465 12:49:26 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.465 
12:49:27 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:09.465 12:49:27 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:09.465 12:49:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.465 12:49:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.465 12:49:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:09.724 /dev/nbd1 00:06:09.724 12:49:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:09.724 12:49:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:09.724 12:49:27 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:09.724 12:49:27 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:09.724 12:49:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:09.724 12:49:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:09.724 12:49:27 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:09.724 12:49:27 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:09.724 12:49:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:09.724 12:49:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:09.724 12:49:27 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.724 1+0 records in 00:06:09.724 1+0 records out 00:06:09.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038961 s, 10.5 MB/s 00:06:09.724 12:49:27 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.724 12:49:27 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:09.724 12:49:27 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.724 12:49:27 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:09.724 12:49:27 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:09.724 12:49:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.724 12:49:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.724 12:49:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.724 12:49:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.724 12:49:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:09.984 { 00:06:09.984 "nbd_device": "/dev/nbd0", 00:06:09.984 "bdev_name": "Malloc0" 00:06:09.984 }, 00:06:09.984 { 00:06:09.984 "nbd_device": "/dev/nbd1", 00:06:09.984 "bdev_name": "Malloc1" 00:06:09.984 } 00:06:09.984 ]' 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:09.984 { 00:06:09.984 "nbd_device": "/dev/nbd0", 00:06:09.984 "bdev_name": "Malloc0" 00:06:09.984 }, 00:06:09.984 { 00:06:09.984 "nbd_device": "/dev/nbd1", 00:06:09.984 "bdev_name": "Malloc1" 00:06:09.984 } 00:06:09.984 ]' 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:09.984 /dev/nbd1' 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:09.984 /dev/nbd1' 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:09.984 
12:49:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:09.984 256+0 records in 00:06:09.984 256+0 records out 00:06:09.984 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134734 s, 77.8 MB/s 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:09.984 256+0 records in 00:06:09.984 256+0 records out 00:06:09.984 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0183939 s, 57.0 MB/s 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.984 256+0 records in 00:06:09.984 256+0 records out 00:06:09.984 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270418 s, 38.8 MB/s 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.984 12:49:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:10.244 12:49:27 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:10.244 12:49:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:10.244 12:49:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:10.244 12:49:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.244 12:49:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.244 12:49:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:10.244 12:49:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:10.244 12:49:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.244 12:49:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.244 12:49:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:10.503 12:49:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:10.503 12:49:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:10.503 12:49:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:10.503 12:49:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.503 12:49:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.503 12:49:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:10.503 12:49:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:10.503 12:49:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.503 12:49:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.503 12:49:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.503 12:49:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.762 12:49:28 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:10.762 12:49:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:10.762 12:49:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.762 12:49:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.762 12:49:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.762 12:49:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.762 12:49:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:10.762 12:49:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.762 12:49:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:10.762 12:49:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:10.762 12:49:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:10.762 12:49:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:10.762 12:49:28 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:11.022 12:49:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:11.280 [2024-11-26 12:49:28.840505] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.280 [2024-11-26 12:49:28.907939] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.280 [2024-11-26 12:49:28.907958] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.539 [2024-11-26 12:49:28.984099] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:11.539 [2024-11-26 12:49:28.984160] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:14.077 spdk_app_start Round 2 00:06:14.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:14.077 12:49:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:14.077 12:49:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:14.077 12:49:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70500 /var/tmp/spdk-nbd.sock 00:06:14.077 12:49:31 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70500 ']' 00:06:14.077 12:49:31 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.077 12:49:31 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.077 12:49:31 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:14.077 12:49:31 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.077 12:49:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:14.077 12:49:31 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.077 12:49:31 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:14.077 12:49:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.337 Malloc0 00:06:14.337 12:49:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.597 Malloc1 00:06:14.597 12:49:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.597 12:49:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.597 12:49:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.597 12:49:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:14.597 12:49:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.597 12:49:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:14.597 12:49:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.597 12:49:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.597 12:49:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.597 12:49:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:14.598 12:49:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.598 12:49:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:14.598 12:49:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:14.598 12:49:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:14.598 12:49:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.598 12:49:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:14.858 /dev/nbd0 00:06:14.858 12:49:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:14.858 12:49:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:14.858 12:49:32 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:14.858 12:49:32 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:14.858 12:49:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:14.858 12:49:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:14.858 12:49:32 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:14.858 12:49:32 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:14.858 12:49:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:06:14.858 12:49:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:14.858 12:49:32 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.858 1+0 records in 00:06:14.858 1+0 records out 00:06:14.858 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367093 s, 11.2 MB/s 00:06:14.858 12:49:32 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.858 12:49:32 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:14.858 12:49:32 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.858 12:49:32 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:14.858 12:49:32 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:14.858 12:49:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.858 12:49:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.858 12:49:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:15.119 /dev/nbd1 00:06:15.119 12:49:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:15.119 12:49:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:15.119 12:49:32 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:15.119 12:49:32 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:15.119 12:49:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:15.119 12:49:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:15.119 12:49:32 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:15.119 12:49:32 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:06:15.119 12:49:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:15.119 12:49:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:15.119 12:49:32 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.119 1+0 records in 00:06:15.119 1+0 records out 00:06:15.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340134 s, 12.0 MB/s 00:06:15.119 12:49:32 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.119 12:49:32 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:15.119 12:49:32 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.119 12:49:32 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:15.119 12:49:32 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:15.119 12:49:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.119 12:49:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.119 12:49:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.119 12:49:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.119 12:49:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.429 12:49:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:15.429 { 00:06:15.429 "nbd_device": "/dev/nbd0", 00:06:15.429 "bdev_name": "Malloc0" 00:06:15.429 }, 00:06:15.429 { 00:06:15.429 "nbd_device": "/dev/nbd1", 00:06:15.429 "bdev_name": "Malloc1" 00:06:15.429 } 00:06:15.429 ]' 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:15.430 { 00:06:15.430 "nbd_device": "/dev/nbd0", 00:06:15.430 "bdev_name": "Malloc0" 00:06:15.430 }, 00:06:15.430 { 00:06:15.430 "nbd_device": "/dev/nbd1", 00:06:15.430 "bdev_name": "Malloc1" 00:06:15.430 } 00:06:15.430 ]' 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:15.430 /dev/nbd1' 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:15.430 /dev/nbd1' 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:15.430 256+0 records in 00:06:15.430 256+0 records out 00:06:15.430 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137166 s, 76.4 MB/s 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.430 12:49:32 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:15.430 256+0 records in 00:06:15.430 256+0 records out 00:06:15.430 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213205 s, 49.2 MB/s 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:15.430 256+0 records in 00:06:15.430 256+0 records out 00:06:15.430 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229325 s, 45.7 MB/s 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.430 12:49:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:15.701 12:49:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:15.701 12:49:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:15.701 12:49:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:15.701 12:49:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.701 12:49:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.701 12:49:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:15.701 12:49:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:15.701 12:49:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.701 12:49:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.701 12:49:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:15.961 12:49:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:15.961 12:49:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:15.961 12:49:33 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:06:15.961 12:49:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.961 12:49:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.961 12:49:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:15.961 12:49:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:15.961 12:49:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.961 12:49:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.961 12:49:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.961 12:49:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.961 12:49:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:15.961 12:49:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:15.961 12:49:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.220 12:49:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:16.220 12:49:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:16.220 12:49:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.220 12:49:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:16.220 12:49:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:16.220 12:49:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:16.221 12:49:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:16.221 12:49:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:16.221 12:49:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:16.221 12:49:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:16.480 12:49:33 event.app_repeat -- 
event/event.sh@35 -- # sleep 3 00:06:16.740 [2024-11-26 12:49:34.220907] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.740 [2024-11-26 12:49:34.290253] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.740 [2024-11-26 12:49:34.290261] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.740 [2024-11-26 12:49:34.367221] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:16.740 [2024-11-26 12:49:34.367293] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:19.293 12:49:36 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70500 /var/tmp/spdk-nbd.sock 00:06:19.293 12:49:36 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70500 ']' 00:06:19.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:19.293 12:49:36 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:19.293 12:49:36 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.293 12:49:36 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:19.293 12:49:36 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.293 12:49:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:19.553 12:49:37 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.553 12:49:37 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:19.553 12:49:37 event.app_repeat -- event/event.sh@39 -- # killprocess 70500 00:06:19.553 12:49:37 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70500 ']' 00:06:19.553 12:49:37 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70500 00:06:19.553 12:49:37 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:19.553 12:49:37 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:19.553 12:49:37 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70500 00:06:19.553 12:49:37 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:19.553 12:49:37 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:19.553 12:49:37 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70500' 00:06:19.553 killing process with pid 70500 00:06:19.553 12:49:37 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70500 00:06:19.553 12:49:37 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70500 00:06:19.814 spdk_app_start is called in Round 0. 00:06:19.814 Shutdown signal received, stop current app iteration 00:06:19.814 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:19.814 spdk_app_start is called in Round 1. 00:06:19.814 Shutdown signal received, stop current app iteration 00:06:19.814 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:19.814 spdk_app_start is called in Round 2. 
00:06:19.814 Shutdown signal received, stop current app iteration 00:06:19.814 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:19.814 spdk_app_start is called in Round 3. 00:06:19.814 Shutdown signal received, stop current app iteration 00:06:19.814 12:49:37 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:19.814 ************************************ 00:06:19.814 END TEST app_repeat 00:06:19.814 ************************************ 00:06:19.814 12:49:37 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:19.814 00:06:19.814 real 0m17.525s 00:06:19.814 user 0m37.861s 00:06:19.814 sys 0m2.897s 00:06:19.814 12:49:37 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.814 12:49:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:20.074 12:49:37 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:20.074 12:49:37 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:20.074 12:49:37 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.074 12:49:37 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.074 12:49:37 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.074 ************************************ 00:06:20.074 START TEST cpu_locks 00:06:20.074 ************************************ 00:06:20.074 12:49:37 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:20.074 * Looking for test storage... 
00:06:20.074 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:20.074 12:49:37 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:20.074 12:49:37 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:20.074 12:49:37 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:20.335 12:49:37 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:20.335 12:49:37 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.335 12:49:37 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.335 12:49:37 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.335 12:49:37 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.335 12:49:37 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.335 12:49:37 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.335 12:49:37 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.335 12:49:37 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.335 12:49:37 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.335 12:49:37 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.335 12:49:37 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.335 12:49:37 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:20.335 12:49:37 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:20.335 12:49:37 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.335 12:49:37 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.335 12:49:37 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:20.335 12:49:37 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:20.335 12:49:37 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.335 12:49:37 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:20.335 12:49:37 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.335 12:49:37 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:20.335 12:49:37 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:20.335 12:49:37 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.335 12:49:37 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:20.335 12:49:37 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.335 12:49:37 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.335 12:49:37 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.335 12:49:37 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:20.335 12:49:37 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.335 12:49:37 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:20.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.335 --rc genhtml_branch_coverage=1 00:06:20.335 --rc genhtml_function_coverage=1 00:06:20.335 --rc genhtml_legend=1 00:06:20.335 --rc geninfo_all_blocks=1 00:06:20.335 --rc geninfo_unexecuted_blocks=1 00:06:20.335 00:06:20.335 ' 00:06:20.335 12:49:37 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:20.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.335 --rc genhtml_branch_coverage=1 00:06:20.335 --rc genhtml_function_coverage=1 00:06:20.335 --rc genhtml_legend=1 00:06:20.335 --rc geninfo_all_blocks=1 00:06:20.335 --rc geninfo_unexecuted_blocks=1 
00:06:20.335 00:06:20.335 ' 00:06:20.335 12:49:37 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:20.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.336 --rc genhtml_branch_coverage=1 00:06:20.336 --rc genhtml_function_coverage=1 00:06:20.336 --rc genhtml_legend=1 00:06:20.336 --rc geninfo_all_blocks=1 00:06:20.336 --rc geninfo_unexecuted_blocks=1 00:06:20.336 00:06:20.336 ' 00:06:20.336 12:49:37 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:20.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.336 --rc genhtml_branch_coverage=1 00:06:20.336 --rc genhtml_function_coverage=1 00:06:20.336 --rc genhtml_legend=1 00:06:20.336 --rc geninfo_all_blocks=1 00:06:20.336 --rc geninfo_unexecuted_blocks=1 00:06:20.336 00:06:20.336 ' 00:06:20.336 12:49:37 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:20.336 12:49:37 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:20.336 12:49:37 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:20.336 12:49:37 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:20.336 12:49:37 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.336 12:49:37 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.336 12:49:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.336 ************************************ 00:06:20.336 START TEST default_locks 00:06:20.336 ************************************ 00:06:20.336 12:49:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:20.336 12:49:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70933 00:06:20.336 12:49:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.336 
12:49:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70933 00:06:20.336 12:49:37 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70933 ']' 00:06:20.336 12:49:37 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.336 12:49:37 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.336 12:49:37 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.336 12:49:37 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.336 12:49:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.336 [2024-11-26 12:49:37.885246] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:20.336 [2024-11-26 12:49:37.885443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70933 ] 00:06:20.596 [2024-11-26 12:49:38.045780] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.596 [2024-11-26 12:49:38.122667] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.166 12:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.166 12:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:21.166 12:49:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70933 00:06:21.166 12:49:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70933 00:06:21.166 12:49:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.736 12:49:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70933 00:06:21.736 12:49:39 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 70933 ']' 00:06:21.736 12:49:39 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 70933 00:06:21.736 12:49:39 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:21.736 12:49:39 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:21.736 12:49:39 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70933 00:06:21.736 killing process with pid 70933 00:06:21.736 12:49:39 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:21.736 12:49:39 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:21.736 12:49:39 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 70933' 00:06:21.736 12:49:39 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 70933 00:06:21.736 12:49:39 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 70933 00:06:22.677 12:49:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70933 00:06:22.677 12:49:39 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:22.677 12:49:39 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70933 00:06:22.677 12:49:39 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:22.677 12:49:39 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.677 12:49:39 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:22.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.677 12:49:39 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.677 12:49:39 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 70933 00:06:22.677 12:49:39 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70933 ']' 00:06:22.677 12:49:39 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.677 12:49:39 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.677 12:49:39 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:22.677 12:49:39 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.677 12:49:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.677 ERROR: process (pid: 70933) is no longer running 00:06:22.677 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70933) - No such process 00:06:22.677 12:49:39 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.677 12:49:39 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:22.677 12:49:39 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:22.677 12:49:39 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:22.677 12:49:39 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:22.677 12:49:39 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:22.677 12:49:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:22.677 12:49:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:22.677 12:49:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:22.677 12:49:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:22.677 00:06:22.677 real 0m2.209s 00:06:22.677 user 0m2.031s 00:06:22.677 sys 0m0.892s 00:06:22.677 ************************************ 00:06:22.677 END TEST default_locks 00:06:22.677 ************************************ 00:06:22.677 12:49:39 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.677 12:49:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.677 12:49:40 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:22.677 12:49:40 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:06:22.677 12:49:40 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.677 12:49:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.677 ************************************ 00:06:22.677 START TEST default_locks_via_rpc 00:06:22.677 ************************************ 00:06:22.677 12:49:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:22.677 12:49:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70987 00:06:22.677 12:49:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:22.677 12:49:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70987 00:06:22.677 12:49:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70987 ']' 00:06:22.677 12:49:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.677 12:49:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.677 12:49:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.677 12:49:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.677 12:49:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.677 [2024-11-26 12:49:40.162307] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:22.677 [2024-11-26 12:49:40.162444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70987 ] 00:06:22.677 [2024-11-26 12:49:40.323208] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.937 [2024-11-26 12:49:40.401733] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.508 12:49:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.508 12:49:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:23.508 12:49:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:23.508 12:49:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.508 12:49:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.508 12:49:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.508 12:49:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:23.508 12:49:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:23.508 12:49:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:23.508 12:49:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:23.508 12:49:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:23.508 12:49:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.508 12:49:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.508 12:49:41 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.508 12:49:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70987 00:06:23.508 12:49:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70987 00:06:23.508 12:49:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.768 12:49:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70987 00:06:23.768 12:49:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 70987 ']' 00:06:23.768 12:49:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 70987 00:06:23.768 12:49:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:23.768 12:49:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:23.768 12:49:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70987 00:06:23.768 killing process with pid 70987 00:06:23.768 12:49:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:23.768 12:49:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:23.768 12:49:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70987' 00:06:23.768 12:49:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 70987 00:06:23.768 12:49:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 70987 00:06:24.339 ************************************ 00:06:24.339 END TEST default_locks_via_rpc 00:06:24.339 ************************************ 00:06:24.339 00:06:24.339 real 0m1.886s 00:06:24.339 user 0m1.698s 00:06:24.339 sys 0m0.690s 00:06:24.339 
12:49:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.339 12:49:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.339 12:49:42 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:24.339 12:49:42 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.339 12:49:42 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.339 12:49:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.339 ************************************ 00:06:24.339 START TEST non_locking_app_on_locked_coremask 00:06:24.339 ************************************ 00:06:24.339 12:49:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:24.599 12:49:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71039 00:06:24.599 12:49:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.599 12:49:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71039 /var/tmp/spdk.sock 00:06:24.599 12:49:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71039 ']' 00:06:24.599 12:49:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.599 12:49:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.599 12:49:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:24.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.600 12:49:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.600 12:49:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.600 [2024-11-26 12:49:42.109499] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:24.600 [2024-11-26 12:49:42.109720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71039 ] 00:06:24.600 [2024-11-26 12:49:42.272205] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.860 [2024-11-26 12:49:42.339910] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.451 12:49:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.451 12:49:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:25.451 12:49:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:25.451 12:49:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71055 00:06:25.451 12:49:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71055 /var/tmp/spdk2.sock 00:06:25.451 12:49:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71055 ']' 00:06:25.451 12:49:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.451 12:49:42 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.451 12:49:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:25.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:25.451 12:49:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.451 12:49:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.451 [2024-11-26 12:49:42.981967] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:25.451 [2024-11-26 12:49:42.982183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71055 ] 00:06:25.724 [2024-11-26 12:49:43.150787] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:25.724 [2024-11-26 12:49:43.150849] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.724 [2024-11-26 12:49:43.299811] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.320 12:49:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.320 12:49:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:26.320 12:49:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71039 00:06:26.320 12:49:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71039 00:06:26.320 12:49:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.260 12:49:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71039 00:06:27.260 12:49:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71039 ']' 00:06:27.260 12:49:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71039 00:06:27.260 12:49:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:27.260 12:49:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.260 12:49:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71039 00:06:27.260 killing process with pid 71039 00:06:27.260 12:49:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.260 12:49:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.260 12:49:44 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 71039' 00:06:27.260 12:49:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71039 00:06:27.260 12:49:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71039 00:06:28.641 12:49:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71055 00:06:28.641 12:49:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71055 ']' 00:06:28.641 12:49:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71055 00:06:28.641 12:49:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:28.641 12:49:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:28.641 12:49:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71055 00:06:28.641 killing process with pid 71055 00:06:28.641 12:49:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:28.641 12:49:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:28.641 12:49:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71055' 00:06:28.641 12:49:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71055 00:06:28.641 12:49:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71055 00:06:29.211 00:06:29.211 real 0m4.772s 00:06:29.211 user 0m4.618s 00:06:29.211 sys 0m1.532s 00:06:29.211 ************************************ 00:06:29.211 END TEST non_locking_app_on_locked_coremask 
00:06:29.211 ************************************ 00:06:29.211 12:49:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.211 12:49:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.211 12:49:46 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:29.211 12:49:46 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.211 12:49:46 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.211 12:49:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.211 ************************************ 00:06:29.211 START TEST locking_app_on_unlocked_coremask 00:06:29.211 ************************************ 00:06:29.211 12:49:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:29.211 12:49:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71126 00:06:29.211 12:49:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71126 /var/tmp/spdk.sock 00:06:29.211 12:49:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:29.211 12:49:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71126 ']' 00:06:29.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:29.211 12:49:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.211 12:49:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.211 12:49:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.211 12:49:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.211 12:49:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.471 [2024-11-26 12:49:46.948432] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:29.471 [2024-11-26 12:49:46.948616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71126 ] 00:06:29.471 [2024-11-26 12:49:47.111109] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:29.471 [2024-11-26 12:49:47.111221] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.731 [2024-11-26 12:49:47.180987] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.300 12:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.300 12:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:30.300 12:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71142 00:06:30.300 12:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71142 /var/tmp/spdk2.sock 00:06:30.300 12:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:30.300 12:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71142 ']' 00:06:30.300 12:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.300 12:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:30.300 12:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.300 12:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:30.300 12:49:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.300 [2024-11-26 12:49:47.850096] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:30.300 [2024-11-26 12:49:47.850349] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71142 ]
00:06:30.559 [2024-11-26 12:49:47.997708] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:30.559 [2024-11-26 12:49:48.144501] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:31.129 12:49:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:31.129 12:49:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:31.129 12:49:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71142
00:06:31.129 12:49:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71142
00:06:31.129 12:49:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:32.066 12:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71126
00:06:32.066 12:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71126 ']'
00:06:32.066 12:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71126
00:06:32.066 12:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:32.066 12:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:32.066 12:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71126
killing process with pid 71126
12:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:32.067 12:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:32.067 12:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71126'
00:06:32.067 12:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71126
00:06:32.067 12:49:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71126
00:06:33.448 12:49:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71142
00:06:33.448 12:49:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71142 ']'
00:06:33.448 12:49:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71142
00:06:33.448 12:49:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:33.448 12:49:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:33.448 12:49:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71142
killing process with pid 71142
12:49:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:33.448 12:49:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:33.448 12:49:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71142'
00:06:33.448 12:49:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71142
00:06:33.448 12:49:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71142
00:06:34.018
00:06:34.018 real	0m4.734s
00:06:34.018 user	0m4.605s
00:06:34.018 sys	0m1.537s
00:06:34.018 ************************************
00:06:34.018 END TEST locking_app_on_unlocked_coremask
00:06:34.018 ************************************
00:06:34.018 12:49:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:34.018 12:49:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:34.018 12:49:51 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:34.018 12:49:51 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:34.018 12:49:51 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:34.018 12:49:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:34.018 ************************************
00:06:34.018 START TEST locking_app_on_locked_coremask
00:06:34.018 ************************************
00:06:34.018 12:49:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask
00:06:34.018 12:49:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71222
00:06:34.018 12:49:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:34.018 12:49:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71222 /var/tmp/spdk.sock
00:06:34.018 12:49:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71222 ']'
00:06:34.018 12:49:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:34.018 12:49:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:34.018 12:49:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
12:49:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:34.018 12:49:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:34.278 [2024-11-26 12:49:51.744827] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:06:34.278 [2024-11-26 12:49:51.745042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71222 ]
00:06:34.278 [2024-11-26 12:49:51.905347] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:34.538 [2024-11-26 12:49:51.973002] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:35.108 12:49:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:35.108 12:49:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:35.108 12:49:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71238
00:06:35.108 12:49:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71238 /var/tmp/spdk2.sock
00:06:35.108 12:49:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:35.108 12:49:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
00:06:35.108 12:49:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71238 /var/tmp/spdk2.sock
00:06:35.108 12:49:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:35.108 12:49:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:35.108 12:49:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:35.108 12:49:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:35.108 12:49:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71238 /var/tmp/spdk2.sock
00:06:35.108 12:49:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71238 ']'
00:06:35.108 12:49:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:35.108 12:49:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:35.108 12:49:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
12:49:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:35.108 12:49:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:35.108 [2024-11-26 12:49:52.644937] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:06:35.108 [2024-11-26 12:49:52.645164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71238 ]
00:06:35.368 [2024-11-26 12:49:52.795016] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71222 has claimed it.
00:06:35.368 [2024-11-26 12:49:52.795089] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:35.628 ERROR: process (pid: 71238) is no longer running
00:06:35.628 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71238) - No such process
00:06:35.628 12:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:35.628 12:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1
00:06:35.628 12:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
00:06:35.628 12:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:35.628 12:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:35.628 12:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:35.628 12:49:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71222
00:06:35.628 12:49:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71222
00:06:35.628 12:49:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:35.887 12:49:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71222
00:06:35.887 12:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71222 ']'
00:06:35.887 12:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71222
00:06:35.887 12:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:35.887 12:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:35.887 12:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71222
00:06:36.147 killing process with pid 71222
12:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:36.147 12:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:36.147 12:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71222'
00:06:36.147 12:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71222
00:06:36.147 12:49:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71222
00:06:36.718
00:06:36.718 real	0m2.560s
00:06:36.718 user	0m2.590s
00:06:36.718 sys	0m0.788s
00:06:36.718 12:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:36.718 12:49:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:36.718 ************************************
00:06:36.718 END TEST locking_app_on_locked_coremask
00:06:36.718 ************************************
00:06:36.718 12:49:54 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:36.718 12:49:54 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:36.718 12:49:54 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:36.718 12:49:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:36.718 ************************************
00:06:36.718 START TEST locking_overlapped_coremask
00:06:36.718 ************************************
00:06:36.718 12:49:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask
00:06:36.718 12:49:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71288
00:06:36.718 12:49:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:06:36.718 12:49:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71288 /var/tmp/spdk.sock
00:06:36.718 12:49:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71288 ']'
00:06:36.718 12:49:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:36.718 12:49:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:36.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
12:49:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:36.718 12:49:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:36.718 12:49:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:36.718 [2024-11-26 12:49:54.373990] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:06:36.718 [2024-11-26 12:49:54.374109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71288 ]
00:06:36.978 [2024-11-26 12:49:54.534513] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:36.978 [2024-11-26 12:49:54.604266] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:06:36.978 [2024-11-26 12:49:54.604314] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:36.978 [2024-11-26 12:49:54.604432] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:06:37.547 12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:37.547 12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:37.547 12:49:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71301
00:06:37.547 12:49:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:06:37.547 12:49:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71301 /var/tmp/spdk2.sock
00:06:37.547 12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0
00:06:37.547 12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71301 /var/tmp/spdk2.sock
00:06:37.547 12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:37.547 12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:37.547 12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:37.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:37.547 12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71301 /var/tmp/spdk2.sock
00:06:37.547 12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71301 ']'
00:06:37.547 12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:37.547 12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:37.547 12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:37.547 12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:37.547 12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:37.806 [2024-11-26 12:49:55.279233] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:06:37.806 [2024-11-26 12:49:55.279389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71301 ]
00:06:37.806 [2024-11-26 12:49:55.433631] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71288 has claimed it.
00:06:37.806 [2024-11-26 12:49:55.433707] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:38.384 ERROR: process (pid: 71301) is no longer running
00:06:38.384 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71301) - No such process
00:06:38.384 12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:38.384 12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1
00:06:38.384 12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1
00:06:38.384 12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:38.384 12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:38.384 12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:38.384 12:49:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:06:38.384 12:49:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:06:38.384 12:49:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:06:38.384 12:49:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:38.384 12:49:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71288
00:06:38.384 12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71288 ']'
00:06:38.384 12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71288
00:06:38.384 12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname
00:06:38.384 12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:38.384 12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71288
00:06:38.384 12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:38.384 12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
killing process with pid 71288
12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71288'
00:06:38.384 12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71288
00:06:38.384 12:49:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71288
00:06:38.986
00:06:38.986 real	0m2.325s
00:06:38.986 user	0m5.922s
00:06:38.987 sys	0m0.690s
00:06:38.987 12:49:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:38.987 12:49:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:38.987 ************************************
00:06:38.987 END TEST locking_overlapped_coremask
00:06:38.987 ************************************
00:06:38.987 12:49:56 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:06:38.987 12:49:56 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:38.987 12:49:56 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:38.987 12:49:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:39.247 ************************************
00:06:39.247 START TEST locking_overlapped_coremask_via_rpc
00:06:39.247 ************************************
00:06:39.247 12:49:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc
00:06:39.247 12:49:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71353
00:06:39.247 12:49:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71353 /var/tmp/spdk.sock
00:06:39.247 12:49:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:06:39.247 12:49:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71353 ']'
00:06:39.247 12:49:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:39.247 12:49:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:39.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
12:49:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:39.247 12:49:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:39.247 12:49:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:39.247 [2024-11-26 12:49:56.770319] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:06:39.247 [2024-11-26 12:49:56.770474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71353 ]
00:06:39.507 [2024-11-26 12:49:56.932857] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:39.507 [2024-11-26 12:49:56.932923] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:39.507 [2024-11-26 12:49:57.005924] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:06:39.507 [2024-11-26 12:49:57.006130] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:39.507 [2024-11-26 12:49:57.006277] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:06:40.075 12:49:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:40.075 12:49:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:06:40.075 12:49:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:06:40.075 12:49:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71371
00:06:40.075 12:49:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71371 /var/tmp/spdk2.sock
00:06:40.076 12:49:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71371 ']'
00:06:40.076 12:49:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:40.076 12:49:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:40.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
12:49:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:40.076 12:49:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:40.076 12:49:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:40.076 [2024-11-26 12:49:57.653763] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:06:40.076 [2024-11-26 12:49:57.653888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71371 ]
00:06:40.336 [2024-11-26 12:49:57.807299] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:40.336 [2024-11-26 12:49:57.807378] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:40.336 [2024-11-26 12:49:57.978347] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:06:40.336 [2024-11-26 12:49:57.981405] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:06:40.336 [2024-11-26 12:49:57.981516] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4
00:06:41.274 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:41.274 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:06:41.274 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:06:41.274 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:41.274 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:41.274 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:41.274 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:41.274 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0
00:06:41.274 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:41.274 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:06:41.274 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:41.275 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:06:41.275 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:41.275 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:41.275 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:41.275 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:41.275 [2024-11-26 12:49:58.725385] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71353 has claimed it.
00:06:41.275 request:
00:06:41.275 {
00:06:41.275 "method": "framework_enable_cpumask_locks",
00:06:41.275 "req_id": 1
00:06:41.275 }
00:06:41.275 Got JSON-RPC error response
00:06:41.275 response:
00:06:41.275 {
00:06:41.275 "code": -32603,
00:06:41.275 "message": "Failed to claim CPU core: 2"
00:06:41.275 }
00:06:41.275 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:06:41.275 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1
00:06:41.275 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:41.275 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:41.275 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:41.275 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71353 /var/tmp/spdk.sock
00:06:41.275 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71353 ']'
00:06:41.275 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:41.275 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:41.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:41.275 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:41.275 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:41.275 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:41.275 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:06:41.275 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71371 /var/tmp/spdk2.sock
00:06:41.275 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71371 ']'
00:06:41.275 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:41.275 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:41.275 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:41.275 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.275 12:49:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.535 12:49:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.535 12:49:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:41.535 12:49:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:41.535 12:49:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:41.535 12:49:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:41.535 12:49:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:41.535 00:06:41.535 real 0m2.480s 00:06:41.535 user 0m1.019s 00:06:41.535 sys 0m0.179s 00:06:41.535 12:49:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.535 12:49:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.535 ************************************ 00:06:41.535 END TEST locking_overlapped_coremask_via_rpc 00:06:41.535 ************************************ 00:06:41.535 12:49:59 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:41.535 12:49:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71353 ]] 00:06:41.535 12:49:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 71353 00:06:41.535 12:49:59 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71353 ']' 00:06:41.535 12:49:59 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71353 00:06:41.535 12:49:59 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:41.535 12:49:59 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.795 12:49:59 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71353 00:06:41.795 12:49:59 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:41.795 12:49:59 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:41.795 killing process with pid 71353 00:06:41.795 12:49:59 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71353' 00:06:41.795 12:49:59 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71353 00:06:41.795 12:49:59 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71353 00:06:42.363 12:49:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71371 ]] 00:06:42.363 12:49:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71371 00:06:42.363 12:49:59 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71371 ']' 00:06:42.363 12:49:59 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71371 00:06:42.363 12:49:59 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:42.363 12:49:59 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:42.363 12:49:59 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71371 00:06:42.363 12:49:59 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:42.363 12:49:59 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:42.363 killing process with pid 71371 00:06:42.363 12:49:59 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 71371' 00:06:42.363 12:49:59 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71371 00:06:42.363 12:49:59 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71371 00:06:43.302 12:50:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:43.302 12:50:00 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:43.302 12:50:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71353 ]] 00:06:43.302 12:50:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71353 00:06:43.302 12:50:00 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71353 ']' 00:06:43.302 12:50:00 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71353 00:06:43.302 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71353) - No such process 00:06:43.302 Process with pid 71353 is not found 00:06:43.302 12:50:00 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71353 is not found' 00:06:43.302 12:50:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71371 ]] 00:06:43.302 12:50:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71371 00:06:43.302 12:50:00 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71371 ']' 00:06:43.302 12:50:00 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71371 00:06:43.302 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71371) - No such process 00:06:43.302 Process with pid 71371 is not found 00:06:43.302 12:50:00 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71371 is not found' 00:06:43.302 12:50:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:43.302 00:06:43.302 real 0m23.095s 00:06:43.302 user 0m36.248s 00:06:43.302 sys 0m7.790s 00:06:43.302 12:50:00 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.302 12:50:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.302 
************************************ 00:06:43.302 END TEST cpu_locks 00:06:43.302 ************************************ 00:06:43.302 00:06:43.302 real 0m51.533s 00:06:43.302 user 1m34.232s 00:06:43.302 sys 0m12.004s 00:06:43.302 12:50:00 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.302 12:50:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.302 ************************************ 00:06:43.302 END TEST event 00:06:43.302 ************************************ 00:06:43.302 12:50:00 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:43.302 12:50:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.302 12:50:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.302 12:50:00 -- common/autotest_common.sh@10 -- # set +x 00:06:43.302 ************************************ 00:06:43.302 START TEST thread 00:06:43.302 ************************************ 00:06:43.302 12:50:00 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:43.302 * Looking for test storage... 
00:06:43.302 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:43.302 12:50:00 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:43.302 12:50:00 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:43.302 12:50:00 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:43.302 12:50:00 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:43.302 12:50:00 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.302 12:50:00 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.302 12:50:00 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.302 12:50:00 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.302 12:50:00 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.302 12:50:00 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.302 12:50:00 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.302 12:50:00 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.302 12:50:00 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.302 12:50:00 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.302 12:50:00 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.302 12:50:00 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:43.302 12:50:00 thread -- scripts/common.sh@345 -- # : 1 00:06:43.302 12:50:00 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.302 12:50:00 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.562 12:50:00 thread -- scripts/common.sh@365 -- # decimal 1 00:06:43.562 12:50:00 thread -- scripts/common.sh@353 -- # local d=1 00:06:43.562 12:50:00 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.562 12:50:00 thread -- scripts/common.sh@355 -- # echo 1 00:06:43.562 12:50:00 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.562 12:50:00 thread -- scripts/common.sh@366 -- # decimal 2 00:06:43.562 12:50:00 thread -- scripts/common.sh@353 -- # local d=2 00:06:43.562 12:50:00 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.562 12:50:00 thread -- scripts/common.sh@355 -- # echo 2 00:06:43.562 12:50:00 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.562 12:50:00 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.562 12:50:00 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.562 12:50:00 thread -- scripts/common.sh@368 -- # return 0 00:06:43.562 12:50:00 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.562 12:50:00 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:43.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.562 --rc genhtml_branch_coverage=1 00:06:43.562 --rc genhtml_function_coverage=1 00:06:43.562 --rc genhtml_legend=1 00:06:43.562 --rc geninfo_all_blocks=1 00:06:43.562 --rc geninfo_unexecuted_blocks=1 00:06:43.562 00:06:43.562 ' 00:06:43.562 12:50:00 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:43.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.562 --rc genhtml_branch_coverage=1 00:06:43.562 --rc genhtml_function_coverage=1 00:06:43.562 --rc genhtml_legend=1 00:06:43.562 --rc geninfo_all_blocks=1 00:06:43.562 --rc geninfo_unexecuted_blocks=1 00:06:43.562 00:06:43.562 ' 00:06:43.562 12:50:00 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:43.562 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.562 --rc genhtml_branch_coverage=1 00:06:43.562 --rc genhtml_function_coverage=1 00:06:43.562 --rc genhtml_legend=1 00:06:43.562 --rc geninfo_all_blocks=1 00:06:43.562 --rc geninfo_unexecuted_blocks=1 00:06:43.562 00:06:43.562 ' 00:06:43.562 12:50:00 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:43.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.562 --rc genhtml_branch_coverage=1 00:06:43.562 --rc genhtml_function_coverage=1 00:06:43.562 --rc genhtml_legend=1 00:06:43.562 --rc geninfo_all_blocks=1 00:06:43.562 --rc geninfo_unexecuted_blocks=1 00:06:43.562 00:06:43.562 ' 00:06:43.562 12:50:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:43.562 12:50:00 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:43.562 12:50:00 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.562 12:50:00 thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.562 ************************************ 00:06:43.562 START TEST thread_poller_perf 00:06:43.562 ************************************ 00:06:43.562 12:50:01 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:43.562 [2024-11-26 12:50:01.056789] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:43.562 [2024-11-26 12:50:01.056926] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71509 ] 00:06:43.562 [2024-11-26 12:50:01.211704] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.821 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:43.821 [2024-11-26 12:50:01.283823] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.758 [2024-11-26T12:50:02.442Z] ====================================== 00:06:44.758 [2024-11-26T12:50:02.442Z] busy:2300609192 (cyc) 00:06:44.758 [2024-11-26T12:50:02.442Z] total_run_count: 421000 00:06:44.758 [2024-11-26T12:50:02.442Z] tsc_hz: 2290000000 (cyc) 00:06:44.758 [2024-11-26T12:50:02.442Z] ====================================== 00:06:44.758 [2024-11-26T12:50:02.442Z] poller_cost: 5464 (cyc), 2386 (nsec) 00:06:44.758 00:06:44.758 real 0m1.423s 00:06:44.758 user 0m1.182s 00:06:44.758 sys 0m0.135s 00:06:44.758 12:50:02 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.758 12:50:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:44.758 ************************************ 00:06:44.758 END TEST thread_poller_perf 00:06:44.759 ************************************ 00:06:45.019 12:50:02 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:45.019 12:50:02 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:45.019 12:50:02 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.019 12:50:02 thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.019 ************************************ 00:06:45.019 START TEST thread_poller_perf 00:06:45.019 
************************************ 00:06:45.019 12:50:02 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:45.019 [2024-11-26 12:50:02.550591] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:45.019 [2024-11-26 12:50:02.550722] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71546 ] 00:06:45.279 [2024-11-26 12:50:02.710635] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.279 [2024-11-26 12:50:02.781372] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.279 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:46.660 [2024-11-26T12:50:04.344Z] ====================================== 00:06:46.660 [2024-11-26T12:50:04.344Z] busy:2293684008 (cyc) 00:06:46.660 [2024-11-26T12:50:04.344Z] total_run_count: 5380000 00:06:46.660 [2024-11-26T12:50:04.344Z] tsc_hz: 2290000000 (cyc) 00:06:46.660 [2024-11-26T12:50:04.344Z] ====================================== 00:06:46.660 [2024-11-26T12:50:04.344Z] poller_cost: 426 (cyc), 186 (nsec) 00:06:46.660 00:06:46.660 real 0m1.404s 00:06:46.660 user 0m1.184s 00:06:46.660 sys 0m0.114s 00:06:46.660 12:50:03 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.660 12:50:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:46.660 ************************************ 00:06:46.660 END TEST thread_poller_perf 00:06:46.660 ************************************ 00:06:46.660 12:50:03 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:46.660 00:06:46.660 real 0m3.197s 00:06:46.660 user 0m2.541s 00:06:46.660 sys 0m0.460s 00:06:46.660 12:50:03 thread -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.660 12:50:03 thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.660 ************************************ 00:06:46.660 END TEST thread 00:06:46.660 ************************************ 00:06:46.660 12:50:04 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:46.660 12:50:04 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:46.660 12:50:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.660 12:50:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.660 12:50:04 -- common/autotest_common.sh@10 -- # set +x 00:06:46.660 ************************************ 00:06:46.660 START TEST app_cmdline 00:06:46.660 ************************************ 00:06:46.660 12:50:04 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:46.660 * Looking for test storage... 00:06:46.660 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:46.660 12:50:04 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:46.660 12:50:04 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:46.660 12:50:04 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:46.660 12:50:04 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:46.660 12:50:04 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.660 12:50:04 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.660 12:50:04 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.660 12:50:04 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.660 12:50:04 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.660 12:50:04 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.660 12:50:04 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.660 12:50:04 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:06:46.660 12:50:04 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.660 12:50:04 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.660 12:50:04 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.660 12:50:04 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:46.660 12:50:04 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:46.660 12:50:04 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.660 12:50:04 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.660 12:50:04 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:46.660 12:50:04 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:46.660 12:50:04 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.660 12:50:04 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:46.660 12:50:04 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.660 12:50:04 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:46.660 12:50:04 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:46.660 12:50:04 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.660 12:50:04 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:46.660 12:50:04 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.660 12:50:04 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.660 12:50:04 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.660 12:50:04 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:46.660 12:50:04 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.660 12:50:04 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:46.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.660 --rc genhtml_branch_coverage=1 00:06:46.660 --rc genhtml_function_coverage=1 00:06:46.660 --rc 
genhtml_legend=1 00:06:46.660 --rc geninfo_all_blocks=1 00:06:46.660 --rc geninfo_unexecuted_blocks=1 00:06:46.660 00:06:46.660 ' 00:06:46.660 12:50:04 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:46.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.660 --rc genhtml_branch_coverage=1 00:06:46.660 --rc genhtml_function_coverage=1 00:06:46.660 --rc genhtml_legend=1 00:06:46.660 --rc geninfo_all_blocks=1 00:06:46.660 --rc geninfo_unexecuted_blocks=1 00:06:46.660 00:06:46.660 ' 00:06:46.660 12:50:04 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:46.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.660 --rc genhtml_branch_coverage=1 00:06:46.660 --rc genhtml_function_coverage=1 00:06:46.660 --rc genhtml_legend=1 00:06:46.660 --rc geninfo_all_blocks=1 00:06:46.660 --rc geninfo_unexecuted_blocks=1 00:06:46.660 00:06:46.660 ' 00:06:46.660 12:50:04 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:46.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.660 --rc genhtml_branch_coverage=1 00:06:46.660 --rc genhtml_function_coverage=1 00:06:46.660 --rc genhtml_legend=1 00:06:46.660 --rc geninfo_all_blocks=1 00:06:46.660 --rc geninfo_unexecuted_blocks=1 00:06:46.660 00:06:46.660 ' 00:06:46.660 12:50:04 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:46.660 12:50:04 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71629 00:06:46.660 12:50:04 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:46.660 12:50:04 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71629 00:06:46.660 12:50:04 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 71629 ']' 00:06:46.660 12:50:04 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.660 12:50:04 app_cmdline -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:06:46.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.661 12:50:04 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.661 12:50:04 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.661 12:50:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:46.921 [2024-11-26 12:50:04.370551] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:46.921 [2024-11-26 12:50:04.370673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71629 ] 00:06:46.921 [2024-11-26 12:50:04.530672] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.180 [2024-11-26 12:50:04.612026] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.751 12:50:05 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.751 12:50:05 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:47.751 12:50:05 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:47.751 { 00:06:47.751 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:06:47.751 "fields": { 00:06:47.751 "major": 24, 00:06:47.751 "minor": 9, 00:06:47.751 "patch": 1, 00:06:47.751 "suffix": "-pre", 00:06:47.751 "commit": "b18e1bd62" 00:06:47.751 } 00:06:47.751 } 00:06:47.751 12:50:05 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:47.751 12:50:05 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:47.751 12:50:05 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:47.751 12:50:05 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:47.751 12:50:05 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:47.751 12:50:05 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:47.751 12:50:05 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.751 12:50:05 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:47.751 12:50:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:47.751 12:50:05 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.011 12:50:05 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:48.011 12:50:05 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:48.011 12:50:05 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:48.011 12:50:05 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:48.011 12:50:05 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:48.011 12:50:05 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:48.011 12:50:05 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.011 12:50:05 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:48.011 12:50:05 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.011 12:50:05 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:48.011 12:50:05 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.011 12:50:05 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:48.011 12:50:05 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:48.011 12:50:05 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:48.011 request: 00:06:48.011 { 00:06:48.011 "method": "env_dpdk_get_mem_stats", 00:06:48.011 "req_id": 1 00:06:48.011 } 00:06:48.011 Got JSON-RPC error response 00:06:48.011 response: 00:06:48.011 { 00:06:48.011 "code": -32601, 00:06:48.011 "message": "Method not found" 00:06:48.011 } 00:06:48.011 12:50:05 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:48.011 12:50:05 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:48.011 12:50:05 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:48.011 12:50:05 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:48.011 12:50:05 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71629 00:06:48.011 12:50:05 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 71629 ']' 00:06:48.011 12:50:05 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 71629 00:06:48.011 12:50:05 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:48.011 12:50:05 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:48.011 12:50:05 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71629 00:06:48.273 12:50:05 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:48.273 12:50:05 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:48.273 killing process with pid 71629 00:06:48.273 12:50:05 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71629' 00:06:48.273 12:50:05 app_cmdline -- common/autotest_common.sh@969 -- # kill 71629 00:06:48.273 12:50:05 app_cmdline -- common/autotest_common.sh@974 -- # wait 71629 00:06:48.842 00:06:48.842 real 0m2.312s 00:06:48.842 user 0m2.389s 00:06:48.842 sys 0m0.730s 00:06:48.842 12:50:06 app_cmdline -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.842 12:50:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:48.842 ************************************ 00:06:48.842 END TEST app_cmdline 00:06:48.842 ************************************ 00:06:48.842 12:50:06 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:48.842 12:50:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.842 12:50:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.842 12:50:06 -- common/autotest_common.sh@10 -- # set +x 00:06:48.842 ************************************ 00:06:48.842 START TEST version 00:06:48.842 ************************************ 00:06:48.842 12:50:06 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:49.103 * Looking for test storage... 00:06:49.103 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:49.103 12:50:06 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:49.103 12:50:06 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:49.103 12:50:06 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:49.103 12:50:06 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:49.104 12:50:06 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.104 12:50:06 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.104 12:50:06 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.104 12:50:06 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.104 12:50:06 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.104 12:50:06 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.104 12:50:06 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.104 12:50:06 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.104 12:50:06 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.104 12:50:06 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:49.104 12:50:06 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.104 12:50:06 version -- scripts/common.sh@344 -- # case "$op" in 00:06:49.104 12:50:06 version -- scripts/common.sh@345 -- # : 1 00:06:49.104 12:50:06 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.104 12:50:06 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:49.104 12:50:06 version -- scripts/common.sh@365 -- # decimal 1 00:06:49.104 12:50:06 version -- scripts/common.sh@353 -- # local d=1 00:06:49.104 12:50:06 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.104 12:50:06 version -- scripts/common.sh@355 -- # echo 1 00:06:49.104 12:50:06 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.104 12:50:06 version -- scripts/common.sh@366 -- # decimal 2 00:06:49.104 12:50:06 version -- scripts/common.sh@353 -- # local d=2 00:06:49.104 12:50:06 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.104 12:50:06 version -- scripts/common.sh@355 -- # echo 2 00:06:49.104 12:50:06 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.104 12:50:06 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.104 12:50:06 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.104 12:50:06 version -- scripts/common.sh@368 -- # return 0 00:06:49.104 12:50:06 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.104 12:50:06 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:49.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.104 --rc genhtml_branch_coverage=1 00:06:49.104 --rc genhtml_function_coverage=1 00:06:49.104 --rc genhtml_legend=1 00:06:49.104 --rc geninfo_all_blocks=1 00:06:49.104 --rc geninfo_unexecuted_blocks=1 00:06:49.104 00:06:49.104 ' 00:06:49.104 12:50:06 version -- common/autotest_common.sh@1694 -- # 
LCOV_OPTS=' 00:06:49.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.104 --rc genhtml_branch_coverage=1 00:06:49.104 --rc genhtml_function_coverage=1 00:06:49.104 --rc genhtml_legend=1 00:06:49.104 --rc geninfo_all_blocks=1 00:06:49.104 --rc geninfo_unexecuted_blocks=1 00:06:49.104 00:06:49.104 ' 00:06:49.104 12:50:06 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:49.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.104 --rc genhtml_branch_coverage=1 00:06:49.104 --rc genhtml_function_coverage=1 00:06:49.104 --rc genhtml_legend=1 00:06:49.104 --rc geninfo_all_blocks=1 00:06:49.104 --rc geninfo_unexecuted_blocks=1 00:06:49.104 00:06:49.104 ' 00:06:49.104 12:50:06 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:49.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.104 --rc genhtml_branch_coverage=1 00:06:49.104 --rc genhtml_function_coverage=1 00:06:49.104 --rc genhtml_legend=1 00:06:49.104 --rc geninfo_all_blocks=1 00:06:49.104 --rc geninfo_unexecuted_blocks=1 00:06:49.104 00:06:49.104 ' 00:06:49.104 12:50:06 version -- app/version.sh@17 -- # get_header_version major 00:06:49.104 12:50:06 version -- app/version.sh@14 -- # cut -f2 00:06:49.104 12:50:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:49.104 12:50:06 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.104 12:50:06 version -- app/version.sh@17 -- # major=24 00:06:49.104 12:50:06 version -- app/version.sh@18 -- # get_header_version minor 00:06:49.104 12:50:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:49.104 12:50:06 version -- app/version.sh@14 -- # cut -f2 00:06:49.104 12:50:06 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.104 12:50:06 version -- app/version.sh@18 -- # minor=9 00:06:49.104 12:50:06 
version -- app/version.sh@19 -- # get_header_version patch 00:06:49.104 12:50:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:49.104 12:50:06 version -- app/version.sh@14 -- # cut -f2 00:06:49.104 12:50:06 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.104 12:50:06 version -- app/version.sh@19 -- # patch=1 00:06:49.104 12:50:06 version -- app/version.sh@20 -- # get_header_version suffix 00:06:49.104 12:50:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:49.104 12:50:06 version -- app/version.sh@14 -- # cut -f2 00:06:49.104 12:50:06 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.104 12:50:06 version -- app/version.sh@20 -- # suffix=-pre 00:06:49.104 12:50:06 version -- app/version.sh@22 -- # version=24.9 00:06:49.104 12:50:06 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:49.104 12:50:06 version -- app/version.sh@25 -- # version=24.9.1 00:06:49.104 12:50:06 version -- app/version.sh@28 -- # version=24.9.1rc0 00:06:49.104 12:50:06 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:49.104 12:50:06 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:49.104 12:50:06 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:06:49.104 12:50:06 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:06:49.104 00:06:49.104 real 0m0.320s 00:06:49.104 user 0m0.194s 00:06:49.104 sys 0m0.182s 00:06:49.104 12:50:06 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.104 12:50:06 version -- common/autotest_common.sh@10 -- # set +x 00:06:49.104 ************************************ 00:06:49.104 END 
TEST version 00:06:49.104 ************************************ 00:06:49.364 12:50:06 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:49.364 12:50:06 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:49.364 12:50:06 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:49.364 12:50:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.364 12:50:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.364 12:50:06 -- common/autotest_common.sh@10 -- # set +x 00:06:49.364 ************************************ 00:06:49.364 START TEST bdev_raid 00:06:49.364 ************************************ 00:06:49.364 12:50:06 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:49.364 * Looking for test storage... 00:06:49.364 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:49.364 12:50:06 bdev_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:49.364 12:50:06 bdev_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:06:49.364 12:50:06 bdev_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:49.364 12:50:07 bdev_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:49.364 12:50:07 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.364 12:50:07 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.364 12:50:07 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.364 12:50:07 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.364 12:50:07 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.364 12:50:07 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.364 12:50:07 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.364 12:50:07 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.364 12:50:07 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.364 12:50:07 bdev_raid -- scripts/common.sh@341 -- # 
ver2_l=1 00:06:49.364 12:50:07 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.364 12:50:07 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:49.364 12:50:07 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:49.364 12:50:07 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.364 12:50:07 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:49.364 12:50:07 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:49.364 12:50:07 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:49.364 12:50:07 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.364 12:50:07 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:49.364 12:50:07 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.364 12:50:07 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:49.364 12:50:07 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:49.364 12:50:07 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.364 12:50:07 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:49.364 12:50:07 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.364 12:50:07 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.364 12:50:07 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.364 12:50:07 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:49.364 12:50:07 bdev_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.364 12:50:07 bdev_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:49.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.364 --rc genhtml_branch_coverage=1 00:06:49.364 --rc genhtml_function_coverage=1 00:06:49.364 --rc genhtml_legend=1 00:06:49.364 --rc geninfo_all_blocks=1 00:06:49.364 --rc geninfo_unexecuted_blocks=1 00:06:49.364 00:06:49.364 ' 00:06:49.364 12:50:07 bdev_raid -- 
common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:49.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.364 --rc genhtml_branch_coverage=1 00:06:49.364 --rc genhtml_function_coverage=1 00:06:49.364 --rc genhtml_legend=1 00:06:49.364 --rc geninfo_all_blocks=1 00:06:49.364 --rc geninfo_unexecuted_blocks=1 00:06:49.364 00:06:49.364 ' 00:06:49.364 12:50:07 bdev_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:49.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.364 --rc genhtml_branch_coverage=1 00:06:49.364 --rc genhtml_function_coverage=1 00:06:49.364 --rc genhtml_legend=1 00:06:49.364 --rc geninfo_all_blocks=1 00:06:49.364 --rc geninfo_unexecuted_blocks=1 00:06:49.364 00:06:49.364 ' 00:06:49.364 12:50:07 bdev_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:49.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.364 --rc genhtml_branch_coverage=1 00:06:49.364 --rc genhtml_function_coverage=1 00:06:49.364 --rc genhtml_legend=1 00:06:49.364 --rc geninfo_all_blocks=1 00:06:49.364 --rc geninfo_unexecuted_blocks=1 00:06:49.364 00:06:49.364 ' 00:06:49.364 12:50:07 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:49.364 12:50:07 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:49.364 12:50:07 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:49.364 12:50:07 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:49.364 12:50:07 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:49.364 12:50:07 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:49.364 12:50:07 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:49.364 12:50:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.364 12:50:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.364 12:50:07 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:06:49.364 ************************************ 00:06:49.364 START TEST raid1_resize_data_offset_test 00:06:49.364 ************************************ 00:06:49.364 12:50:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:06:49.624 12:50:07 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=71799 00:06:49.624 Process raid pid: 71799 00:06:49.624 12:50:07 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 71799' 00:06:49.624 12:50:07 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 71799 00:06:49.624 12:50:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 71799 ']' 00:06:49.624 12:50:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.624 12:50:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.624 12:50:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.624 12:50:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.624 12:50:07 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:49.624 12:50:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.624 [2024-11-26 12:50:07.117914] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:49.624 [2024-11-26 12:50:07.118066] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:49.624 [2024-11-26 12:50:07.280802] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.884 [2024-11-26 12:50:07.355255] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.884 [2024-11-26 12:50:07.431031] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:49.884 [2024-11-26 12:50:07.431078] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:50.456 12:50:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.456 12:50:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:06:50.456 12:50:07 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:50.456 12:50:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.456 12:50:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.456 malloc0 00:06:50.456 12:50:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.456 12:50:07 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:50.456 12:50:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.456 12:50:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.456 malloc1 00:06:50.456 12:50:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.456 12:50:07 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:50.456 12:50:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.456 12:50:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.456 null0 00:06:50.456 12:50:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.456 12:50:08 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:50.456 12:50:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.456 12:50:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.456 [2024-11-26 12:50:08.010543] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:50.456 [2024-11-26 12:50:08.012460] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:50.456 [2024-11-26 12:50:08.012519] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:50.456 [2024-11-26 12:50:08.012654] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:50.456 [2024-11-26 12:50:08.012693] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:50.456 [2024-11-26 12:50:08.012960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:06:50.456 [2024-11-26 12:50:08.013122] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:50.456 [2024-11-26 12:50:08.013141] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:06:50.456 [2024-11-26 12:50:08.013307] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:50.456 12:50:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.456 12:50:08 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:50.456 12:50:08 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:50.456 12:50:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.456 12:50:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.456 12:50:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.456 12:50:08 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:50.456 12:50:08 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:50.456 12:50:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.456 12:50:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.456 [2024-11-26 12:50:08.074451] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:50.456 12:50:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.456 12:50:08 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:50.456 12:50:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.456 12:50:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.716 malloc2 00:06:50.716 12:50:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.716 12:50:08 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:50.716 12:50:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.716 12:50:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.716 [2024-11-26 12:50:08.202638] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:50.716 [2024-11-26 12:50:08.207016] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:50.716 12:50:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.716 [2024-11-26 12:50:08.208892] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:50.716 12:50:08 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:50.716 12:50:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.716 12:50:08 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:50.716 12:50:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.716 12:50:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.716 12:50:08 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:50.716 12:50:08 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 71799 00:06:50.716 12:50:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 71799 ']' 00:06:50.716 12:50:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 71799 00:06:50.716 12:50:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname 00:06:50.716 12:50:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:06:50.716 12:50:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71799 00:06:50.716 12:50:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:50.716 12:50:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:50.716 killing process with pid 71799 00:06:50.716 12:50:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71799' 00:06:50.716 12:50:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 71799 00:06:50.716 12:50:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 71799 00:06:50.716 [2024-11-26 12:50:08.281011] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:50.716 [2024-11-26 12:50:08.281309] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:50.716 [2024-11-26 12:50:08.281371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:50.716 [2024-11-26 12:50:08.281387] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:50.717 [2024-11-26 12:50:08.286613] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:50.717 [2024-11-26 12:50:08.286910] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:50.717 [2024-11-26 12:50:08.286934] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:06:50.976 [2024-11-26 12:50:08.498883] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:51.236 12:50:08 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:06:51.236 00:06:51.236 real 0m1.705s 00:06:51.236 user 0m1.611s 00:06:51.236 sys 0m0.510s 00:06:51.236 12:50:08 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.236 12:50:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.236 ************************************ 00:06:51.236 END TEST raid1_resize_data_offset_test 00:06:51.236 ************************************ 00:06:51.236 12:50:08 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:51.236 12:50:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:51.236 12:50:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.236 12:50:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:51.236 ************************************ 00:06:51.236 START TEST raid0_resize_superblock_test 00:06:51.236 ************************************ 00:06:51.236 12:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:06:51.236 12:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:51.236 12:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71851 00:06:51.236 12:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:51.236 12:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71851' 00:06:51.236 Process raid pid: 71851 00:06:51.236 12:50:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71851 00:06:51.236 12:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71851 ']' 00:06:51.236 12:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.236 12:50:08 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.236 12:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.236 12:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.236 12:50:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.236 [2024-11-26 12:50:08.897837] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:51.236 [2024-11-26 12:50:08.897947] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.496 [2024-11-26 12:50:09.059993] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.496 [2024-11-26 12:50:09.128360] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.755 [2024-11-26 12:50:09.203556] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:51.755 [2024-11-26 12:50:09.203597] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:52.324 12:50:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.324 12:50:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:06:52.324 12:50:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:52.324 12:50:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.324 12:50:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:06:52.324 malloc0 00:06:52.324 12:50:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.324 12:50:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:52.324 12:50:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.324 12:50:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.324 [2024-11-26 12:50:09.937941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:52.324 [2024-11-26 12:50:09.938010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:52.324 [2024-11-26 12:50:09.938036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:52.324 [2024-11-26 12:50:09.938048] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:52.324 [2024-11-26 12:50:09.940576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:52.324 [2024-11-26 12:50:09.940614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:52.324 pt0 00:06:52.324 12:50:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.324 12:50:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:52.324 12:50:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.324 12:50:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.584 7a6ca3b4-1836-4d6c-bd17-ecd97827094a 00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.584 5ee794e6-a1d8-4996-b2f7-d3b37e4fc284 00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.584 7b9984c1-d8b0-4f58-a636-0f8664aabfc2 00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.584 [2024-11-26 12:50:10.144370] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5ee794e6-a1d8-4996-b2f7-d3b37e4fc284 is claimed 00:06:52.584 [2024-11-26 12:50:10.144455] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7b9984c1-d8b0-4f58-a636-0f8664aabfc2 is claimed 00:06:52.584 [2024-11-26 12:50:10.144589] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:52.584 [2024-11-26 12:50:10.144604] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:52.584 [2024-11-26 12:50:10.144862] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:52.584 [2024-11-26 12:50:10.145034] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:52.584 [2024-11-26 12:50:10.145051] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:06:52.584 [2024-11-26 12:50:10.145201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:52.584 12:50:10 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.584 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.584 [2024-11-26 12:50:10.260391] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.845 [2024-11-26 12:50:10.288246] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:52.845 [2024-11-26 12:50:10.288268] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '5ee794e6-a1d8-4996-b2f7-d3b37e4fc284' was resized: old size 131072, new size 204800 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.845 [2024-11-26 12:50:10.300137] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:52.845 [2024-11-26 12:50:10.300158] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '7b9984c1-d8b0-4f58-a636-0f8664aabfc2' was resized: old size 131072, new size 204800 00:06:52.845 [2024-11-26 12:50:10.300188] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.845 12:50:10 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.845 [2024-11-26 12:50:10.412051] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.845 [2024-11-26 12:50:10.459866] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:06:52.845 [2024-11-26 12:50:10.459931] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:52.845 [2024-11-26 12:50:10.459941] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:52.845 [2024-11-26 12:50:10.459954] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:52.845 [2024-11-26 12:50:10.460052] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:52.845 [2024-11-26 12:50:10.460089] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:52.845 [2024-11-26 12:50:10.460103] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.845 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.845 [2024-11-26 12:50:10.471725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:52.845 [2024-11-26 12:50:10.471779] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:52.845 [2024-11-26 12:50:10.471798] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:52.845 [2024-11-26 12:50:10.471810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:52.845 [2024-11-26 12:50:10.474025] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:52.845 [2024-11-26 12:50:10.474057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:06:52.845 [2024-11-26 12:50:10.475411] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 5ee794e6-a1d8-4996-b2f7-d3b37e4fc284 00:06:52.845 [2024-11-26 12:50:10.475467] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5ee794e6-a1d8-4996-b2f7-d3b37e4fc284 is claimed 00:06:52.845 [2024-11-26 12:50:10.475549] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 7b9984c1-d8b0-4f58-a636-0f8664aabfc2 00:06:52.845 [2024-11-26 12:50:10.475570] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7b9984c1-d8b0-4f58-a636-0f8664aabfc2 is claimed 00:06:52.845 [2024-11-26 12:50:10.475647] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 7b9984c1-d8b0-4f58-a636-0f8664aabfc2 (2) smaller than existing raid bdev Raid (3) 00:06:52.845 [2024-11-26 12:50:10.475668] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 5ee794e6-a1d8-4996-b2f7-d3b37e4fc284: File exists 00:06:52.845 [2024-11-26 12:50:10.475704] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:06:52.845 [2024-11-26 12:50:10.475713] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:52.846 [2024-11-26 12:50:10.475938] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:52.846 [2024-11-26 12:50:10.476056] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:06:52.846 [2024-11-26 12:50:10.476067] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600 00:06:52.846 [2024-11-26 12:50:10.476210] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:52.846 pt0 00:06:52.846 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.846 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:06:52.846 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.846 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.846 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.846 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:52.846 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:52.846 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.846 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.846 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:52.846 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:52.846 [2024-11-26 12:50:10.496378] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:52.846 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.106 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:53.106 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:53.106 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:53.106 12:50:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71851 00:06:53.106 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71851 ']' 00:06:53.106 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71851 00:06:53.106 12:50:10 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:06:53.106 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.106 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71851 00:06:53.106 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.106 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.106 killing process with pid 71851 00:06:53.106 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71851' 00:06:53.106 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71851 00:06:53.106 [2024-11-26 12:50:10.580803] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:53.106 [2024-11-26 12:50:10.580857] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:53.106 [2024-11-26 12:50:10.580890] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:53.106 [2024-11-26 12:50:10.580898] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline 00:06:53.106 12:50:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71851 00:06:53.366 [2024-11-26 12:50:10.881462] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:53.626 12:50:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:53.626 00:06:53.626 real 0m2.437s 00:06:53.626 user 0m2.522s 00:06:53.626 sys 0m0.682s 00:06:53.626 12:50:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.626 12:50:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.626 
************************************ 00:06:53.626 END TEST raid0_resize_superblock_test 00:06:53.626 ************************************ 00:06:53.886 12:50:11 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:53.886 12:50:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:53.886 12:50:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.886 12:50:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:53.886 ************************************ 00:06:53.886 START TEST raid1_resize_superblock_test 00:06:53.886 ************************************ 00:06:53.886 12:50:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:06:53.886 12:50:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:53.886 12:50:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71927 00:06:53.886 12:50:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:53.886 Process raid pid: 71927 00:06:53.886 12:50:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71927' 00:06:53.886 12:50:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71927 00:06:53.886 12:50:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71927 ']' 00:06:53.886 12:50:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.886 12:50:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:53.886 12:50:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.886 12:50:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.886 12:50:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.886 [2024-11-26 12:50:11.402473] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:53.886 [2024-11-26 12:50:11.402609] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.886 [2024-11-26 12:50:11.563343] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.146 [2024-11-26 12:50:11.641064] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.146 [2024-11-26 12:50:11.718600] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.146 [2024-11-26 12:50:11.718640] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.715 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.715 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:06:54.715 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:54.715 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.715 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.975 malloc0 00:06:54.975 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.975 12:50:12 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:54.975 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.975 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.975 [2024-11-26 12:50:12.424947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:54.975 [2024-11-26 12:50:12.425034] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:54.975 [2024-11-26 12:50:12.425060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:54.975 [2024-11-26 12:50:12.425072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:54.975 [2024-11-26 12:50:12.427523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:54.975 [2024-11-26 12:50:12.427564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:54.975 pt0 00:06:54.975 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.975 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:54.975 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.975 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.975 eb70c3fc-c9fb-4226-b649-3ac4f3585dc3 00:06:54.975 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.976 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:54.976 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.976 12:50:12 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.976 f5eac235-8311-40d1-ad22-f6a3dac63254 00:06:54.976 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.976 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:54.976 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.976 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.976 b4266545-488b-44eb-b48a-48ac805f93a1 00:06:54.976 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.976 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:54.976 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:54.976 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.976 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.976 [2024-11-26 12:50:12.632655] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev f5eac235-8311-40d1-ad22-f6a3dac63254 is claimed 00:06:54.976 [2024-11-26 12:50:12.632756] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev b4266545-488b-44eb-b48a-48ac805f93a1 is claimed 00:06:54.976 [2024-11-26 12:50:12.632897] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:54.976 [2024-11-26 12:50:12.632913] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:54.976 [2024-11-26 12:50:12.633207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:54.976 [2024-11-26 12:50:12.633380] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:54.976 [2024-11-26 12:50:12.633398] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:06:54.976 [2024-11-26 12:50:12.633530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:54.976 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.976 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:54.976 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:54.976 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.976 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:55.236 [2024-11-26 12:50:12.744620] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.236 [2024-11-26 12:50:12.792426] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:55.236 [2024-11-26 12:50:12.792450] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f5eac235-8311-40d1-ad22-f6a3dac63254' was resized: old size 131072, new size 204800 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:55.236 12:50:12 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.236 [2024-11-26 12:50:12.804425] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:55.236 [2024-11-26 12:50:12.804448] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'b4266545-488b-44eb-b48a-48ac805f93a1' was resized: old size 131072, new size 204800 00:06:55.236 [2024-11-26 12:50:12.804469] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.236 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.497 [2024-11-26 12:50:12.916366] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:55.497 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.497 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:55.497 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:55.497 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:55.497 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:55.497 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.497 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.497 [2024-11-26 12:50:12.944167] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:55.497 [2024-11-26 12:50:12.944239] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:06:55.497 [2024-11-26 12:50:12.944276] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:55.497 [2024-11-26 12:50:12.944432] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:55.497 [2024-11-26 12:50:12.944563] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:55.497 [2024-11-26 12:50:12.944618] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:55.497 [2024-11-26 12:50:12.944631] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:06:55.497 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.497 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:55.497 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.497 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.497 [2024-11-26 12:50:12.956079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:55.497 [2024-11-26 12:50:12.956136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:55.497 [2024-11-26 12:50:12.956155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:55.497 [2024-11-26 12:50:12.956167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:55.497 [2024-11-26 12:50:12.958442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:55.497 [2024-11-26 12:50:12.958473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:55.497 [2024-11-26 12:50:12.959816] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
f5eac235-8311-40d1-ad22-f6a3dac63254 00:06:55.497 [2024-11-26 12:50:12.959880] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev f5eac235-8311-40d1-ad22-f6a3dac63254 is claimed 00:06:55.497 [2024-11-26 12:50:12.959953] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev b4266545-488b-44eb-b48a-48ac805f93a1 00:06:55.497 [2024-11-26 12:50:12.959975] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev b4266545-488b-44eb-b48a-48ac805f93a1 is claimed 00:06:55.497 [2024-11-26 12:50:12.960051] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev b4266545-488b-44eb-b48a-48ac805f93a1 (2) smaller than existing raid bdev Raid (3) 00:06:55.497 [2024-11-26 12:50:12.960081] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev f5eac235-8311-40d1-ad22-f6a3dac63254: File exists 00:06:55.497 [2024-11-26 12:50:12.960117] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:06:55.497 [2024-11-26 12:50:12.960126] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:55.497 [2024-11-26 12:50:12.960358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:55.497 [2024-11-26 12:50:12.960489] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:06:55.497 [2024-11-26 12:50:12.960501] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600 00:06:55.497 [2024-11-26 12:50:12.960631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:55.497 pt0 00:06:55.497 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.497 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:55.497 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:06:55.497 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.497 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.497 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:55.497 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:55.497 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:55.497 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:55.497 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.497 12:50:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.497 [2024-11-26 12:50:12.984302] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:55.497 12:50:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.497 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:55.497 12:50:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:55.497 12:50:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:55.497 12:50:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71927 00:06:55.497 12:50:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71927 ']' 00:06:55.497 12:50:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71927 00:06:55.497 12:50:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:06:55.497 12:50:13 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:55.497 12:50:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71927 00:06:55.497 12:50:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:55.497 12:50:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:55.497 killing process with pid 71927 00:06:55.497 12:50:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71927' 00:06:55.497 12:50:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71927 00:06:55.497 [2024-11-26 12:50:13.064563] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:55.497 [2024-11-26 12:50:13.064613] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:55.497 [2024-11-26 12:50:13.064650] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:55.497 [2024-11-26 12:50:13.064658] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline 00:06:55.497 12:50:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71927 00:06:55.758 [2024-11-26 12:50:13.364931] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:56.328 12:50:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:56.328 00:06:56.328 real 0m2.414s 00:06:56.328 user 0m2.504s 00:06:56.328 sys 0m0.660s 00:06:56.328 12:50:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.328 12:50:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.328 ************************************ 00:06:56.328 END TEST raid1_resize_superblock_test 00:06:56.328 
************************************ 00:06:56.328 12:50:13 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:56.328 12:50:13 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:56.328 12:50:13 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:56.328 12:50:13 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:56.328 12:50:13 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:56.328 12:50:13 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:56.328 12:50:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:56.328 12:50:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.328 12:50:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:56.328 ************************************ 00:06:56.328 START TEST raid_function_test_raid0 00:06:56.328 ************************************ 00:06:56.328 12:50:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:06:56.328 12:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:56.328 12:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:56.328 12:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:56.328 12:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=72008 00:06:56.328 12:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:56.328 12:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 72008' 00:06:56.328 Process raid pid: 72008 00:06:56.328 12:50:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 72008 00:06:56.328 12:50:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 
72008 ']' 00:06:56.329 12:50:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.329 12:50:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.329 12:50:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.329 12:50:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.329 12:50:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:56.329 [2024-11-26 12:50:13.915530] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:56.329 [2024-11-26 12:50:13.915661] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:56.588 [2024-11-26 12:50:14.074334] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.588 [2024-11-26 12:50:14.142398] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.588 [2024-11-26 12:50:14.217058] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.588 [2024-11-26 12:50:14.217103] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:57.157 12:50:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.157 12:50:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:06:57.157 12:50:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:57.157 12:50:14 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.157 12:50:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:57.157 Base_1 00:06:57.157 12:50:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.157 12:50:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:57.157 12:50:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.157 12:50:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:57.157 Base_2 00:06:57.157 12:50:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.157 12:50:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:57.157 12:50:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.157 12:50:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:57.157 [2024-11-26 12:50:14.795686] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:57.157 [2024-11-26 12:50:14.798904] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:57.157 [2024-11-26 12:50:14.799004] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:57.157 [2024-11-26 12:50:14.799035] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:57.157 [2024-11-26 12:50:14.799510] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:57.157 [2024-11-26 12:50:14.799713] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:57.157 [2024-11-26 12:50:14.799737] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name raid, raid_bdev 0x617000006280 00:06:57.157 [2024-11-26 12:50:14.799986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:57.157 12:50:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.157 12:50:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:57.158 12:50:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:57.158 12:50:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.158 12:50:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:57.158 12:50:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.417 12:50:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:57.417 12:50:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:57.417 12:50:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:57.417 12:50:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:57.417 12:50:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:57.417 12:50:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:57.417 12:50:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:57.417 12:50:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:57.417 12:50:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:57.417 12:50:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:57.417 12:50:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 
-- # (( i < 1 )) 00:06:57.417 12:50:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:57.417 [2024-11-26 12:50:15.035509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:57.417 /dev/nbd0 00:06:57.417 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:57.417 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:57.417 12:50:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:57.417 12:50:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:06:57.417 12:50:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:57.418 12:50:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:57.418 12:50:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:57.418 12:50:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:06:57.418 12:50:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:57.418 12:50:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:57.418 12:50:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:57.418 1+0 records in 00:06:57.418 1+0 records out 00:06:57.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405115 s, 10.1 MB/s 00:06:57.678 12:50:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.678 12:50:15 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@886 -- # size=4096 00:06:57.678 12:50:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.678 12:50:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:57.678 12:50:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:06:57.678 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.678 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:57.678 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:57.678 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:57.678 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:57.678 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:57.678 { 00:06:57.678 "nbd_device": "/dev/nbd0", 00:06:57.678 "bdev_name": "raid" 00:06:57.678 } 00:06:57.678 ]' 00:06:57.678 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:57.678 { 00:06:57.678 "nbd_device": "/dev/nbd0", 00:06:57.678 "bdev_name": "raid" 00:06:57.678 } 00:06:57.678 ]' 00:06:57.678 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:57.678 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:57.938 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:57.938 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.938 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:57.938 
12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:57.938 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:57.938 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:57.938 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:57.938 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:57.938 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:57.938 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:57.938 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:57.938 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:57.938 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:57.938 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:57.938 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:57.938 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:57.938 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:57.938 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:57.938 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:57.938 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:57.938 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:57.938 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 
00:06:57.938 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:57.938 4096+0 records in 00:06:57.938 4096+0 records out 00:06:57.938 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0309106 s, 67.8 MB/s 00:06:57.938 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:58.198 4096+0 records in 00:06:58.198 4096+0 records out 00:06:58.198 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.227391 s, 9.2 MB/s 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:58.198 128+0 records in 00:06:58.198 128+0 records out 00:06:58.198 65536 bytes (66 kB, 64 KiB) copied, 0.00116741 s, 56.1 MB/s 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( 
i++ )) 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:58.198 2035+0 records in 00:06:58.198 2035+0 records out 00:06:58.198 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0154787 s, 67.3 MB/s 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:58.198 456+0 records in 00:06:58.198 456+0 records out 00:06:58.198 233472 bytes (233 kB, 228 KiB) copied, 0.00302452 s, 77.2 MB/s 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:58.198 12:50:15 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.198 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:58.463 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:58.463 [2024-11-26 12:50:15.969014] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:58.463 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:58.463 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:58.463 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.463 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.463 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:06:58.463 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:58.463 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.463 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:58.463 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:58.463 12:50:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:58.727 12:50:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:58.727 12:50:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:58.727 12:50:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.727 12:50:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:58.727 12:50:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.727 12:50:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:58.727 12:50:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:58.727 12:50:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:58.727 12:50:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:58.727 12:50:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:58.727 12:50:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:58.727 12:50:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 72008 00:06:58.727 12:50:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 72008 ']' 00:06:58.727 12:50:16 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@954 -- # kill -0 72008 00:06:58.727 12:50:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:06:58.727 12:50:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:58.727 12:50:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72008 00:06:58.727 12:50:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:58.727 12:50:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:58.727 killing process with pid 72008 00:06:58.727 12:50:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72008' 00:06:58.727 12:50:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 72008 00:06:58.727 [2024-11-26 12:50:16.316679] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:58.727 [2024-11-26 12:50:16.316808] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:58.727 12:50:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 72008 00:06:58.727 [2024-11-26 12:50:16.316875] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:58.727 [2024-11-26 12:50:16.316888] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline 00:06:58.727 [2024-11-26 12:50:16.358275] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:59.300 12:50:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:59.300 00:06:59.300 real 0m2.902s 00:06:59.300 user 0m3.437s 00:06:59.300 sys 0m1.026s 00:06:59.300 12:50:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.300 12:50:16 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@10 -- # set +x 00:06:59.300 ************************************ 00:06:59.300 END TEST raid_function_test_raid0 00:06:59.300 ************************************ 00:06:59.300 12:50:16 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:59.300 12:50:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:59.300 12:50:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.300 12:50:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:59.300 ************************************ 00:06:59.300 START TEST raid_function_test_concat 00:06:59.300 ************************************ 00:06:59.300 12:50:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:06:59.300 12:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:59.300 12:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:59.300 12:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:59.300 12:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=72125 00:06:59.300 12:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:59.300 Process raid pid: 72125 00:06:59.300 12:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 72125' 00:06:59.300 12:50:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 72125 00:06:59.300 12:50:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 72125 ']' 00:06:59.300 12:50:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.300 12:50:16 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:06:59.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.300 12:50:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.300 12:50:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:59.300 12:50:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:59.300 [2024-11-26 12:50:16.895769] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:59.300 [2024-11-26 12:50:16.895910] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:59.560 [2024-11-26 12:50:17.060275] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.560 [2024-11-26 12:50:17.129908] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.560 [2024-11-26 12:50:17.205341] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:59.560 [2024-11-26 12:50:17.205389] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:00.131 12:50:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.131 12:50:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:07:00.131 12:50:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:00.131 12:50:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.131 12:50:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:00.131 Base_1 
00:07:00.131 12:50:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.131 12:50:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:00.131 12:50:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.131 12:50:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:00.131 Base_2 00:07:00.131 12:50:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.131 12:50:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:00.131 12:50:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.131 12:50:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:00.131 [2024-11-26 12:50:17.781666] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:00.131 [2024-11-26 12:50:17.784277] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:00.131 [2024-11-26 12:50:17.784353] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:00.131 [2024-11-26 12:50:17.784379] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:00.131 [2024-11-26 12:50:17.784687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:00.131 [2024-11-26 12:50:17.784836] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:00.131 [2024-11-26 12:50:17.784854] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000006280 00:07:00.131 [2024-11-26 12:50:17.785058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:00.131 12:50:17 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.131 12:50:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:00.131 12:50:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:00.131 12:50:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.131 12:50:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:00.131 12:50:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.392 12:50:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:00.392 12:50:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:00.392 12:50:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:00.392 12:50:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:00.392 12:50:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:00.392 12:50:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:00.392 12:50:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:00.392 12:50:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:00.392 12:50:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:00.392 12:50:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:00.392 12:50:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:00.392 12:50:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk raid /dev/nbd0 00:07:00.392 [2024-11-26 12:50:18.001463] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:00.392 /dev/nbd0 00:07:00.392 12:50:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:00.392 12:50:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:00.392 12:50:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:00.392 12:50:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:07:00.392 12:50:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:00.392 12:50:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:00.392 12:50:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:00.392 12:50:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:07:00.392 12:50:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:00.392 12:50:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:00.392 12:50:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.392 1+0 records in 00:07:00.392 1+0 records out 00:07:00.392 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038759 s, 10.6 MB/s 00:07:00.392 12:50:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.392 12:50:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:07:00.392 12:50:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.392 12:50:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:00.392 12:50:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:07:00.392 12:50:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.392 12:50:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:00.392 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:00.392 12:50:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:00.392 12:50:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:00.652 12:50:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:00.652 { 00:07:00.652 "nbd_device": "/dev/nbd0", 00:07:00.652 "bdev_name": "raid" 00:07:00.652 } 00:07:00.652 ]' 00:07:00.652 12:50:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:00.652 { 00:07:00.652 "nbd_device": "/dev/nbd0", 00:07:00.653 "bdev_name": "raid" 00:07:00.653 } 00:07:00.653 ]' 00:07:00.653 12:50:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.653 12:50:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:00.653 12:50:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.653 12:50:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:00.653 12:50:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:00.653 12:50:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:00.653 12:50:18 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:00.653 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:00.653 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:00.653 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:00.653 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:00.653 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:00.653 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:00.653 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:00.653 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:00.913 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:00.913 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:00.913 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:00.913 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:00.913 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:00.913 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:00.913 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:00.913 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:00.913 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:00.913 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd 
if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:00.913 4096+0 records in 00:07:00.913 4096+0 records out 00:07:00.913 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0315995 s, 66.4 MB/s 00:07:00.913 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:00.913 4096+0 records in 00:07:00.913 4096+0 records out 00:07:00.913 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.204152 s, 10.3 MB/s 00:07:00.913 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:00.913 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:01.173 128+0 records in 00:07:01.173 128+0 records out 00:07:01.173 65536 bytes (66 kB, 64 KiB) copied, 0.00109915 s, 59.6 MB/s 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:01.173 2035+0 records in 00:07:01.173 2035+0 records out 00:07:01.173 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0150743 s, 69.1 MB/s 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:01.173 456+0 records in 00:07:01.173 456+0 records out 00:07:01.173 233472 bytes (233 kB, 228 KiB) copied, 0.00261084 s, 89.4 MB/s 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.173 12:50:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:01.433 12:50:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:01.433 [2024-11-26 12:50:18.898923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:01.433 12:50:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:01.433 12:50:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:01.433 12:50:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.433 12:50:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.433 12:50:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:01.433 12:50:18 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:01.433 12:50:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.433 12:50:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:01.433 12:50:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:01.433 12:50:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:01.433 12:50:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:01.433 12:50:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:01.433 12:50:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:01.692 12:50:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:01.692 12:50:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:01.692 12:50:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:01.692 12:50:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:01.692 12:50:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:01.692 12:50:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:01.692 12:50:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:01.692 12:50:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:01.692 12:50:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 72125 00:07:01.692 12:50:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 72125 ']' 00:07:01.692 12:50:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- 
# kill -0 72125 00:07:01.692 12:50:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:07:01.692 12:50:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:01.692 12:50:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72125 00:07:01.692 12:50:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:01.692 12:50:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:01.692 killing process with pid 72125 00:07:01.692 12:50:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72125' 00:07:01.692 12:50:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 72125 00:07:01.692 [2024-11-26 12:50:19.209353] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:01.692 12:50:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 72125 00:07:01.692 [2024-11-26 12:50:19.209511] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:01.692 [2024-11-26 12:50:19.209576] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:01.692 [2024-11-26 12:50:19.209595] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline 00:07:01.692 [2024-11-26 12:50:19.252158] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:01.950 12:50:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:01.950 00:07:01.950 real 0m2.818s 00:07:01.950 user 0m3.276s 00:07:01.950 sys 0m1.031s 00:07:01.950 12:50:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.950 12:50:19 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@10 -- # set +x 00:07:01.951 ************************************ 00:07:01.951 END TEST raid_function_test_concat 00:07:01.951 ************************************ 00:07:02.210 12:50:19 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:02.210 12:50:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:02.210 12:50:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.210 12:50:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:02.210 ************************************ 00:07:02.210 START TEST raid0_resize_test 00:07:02.210 ************************************ 00:07:02.211 12:50:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:07:02.211 12:50:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:02.211 12:50:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:02.211 12:50:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:02.211 12:50:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:02.211 12:50:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:02.211 12:50:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:02.211 12:50:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:02.211 12:50:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:02.211 12:50:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=72232 00:07:02.211 12:50:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:02.211 12:50:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 72232' 00:07:02.211 Process raid pid: 72232 
00:07:02.211 12:50:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 72232 00:07:02.211 12:50:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 72232 ']' 00:07:02.211 12:50:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.211 12:50:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.211 12:50:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.211 12:50:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.211 12:50:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.211 [2024-11-26 12:50:19.771737] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:02.211 [2024-11-26 12:50:19.771869] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.471 [2024-11-26 12:50:19.915367] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.471 [2024-11-26 12:50:19.986708] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.471 [2024-11-26 12:50:20.062640] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:02.471 [2024-11-26 12:50:20.062681] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.041 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.041 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:07:03.041 12:50:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:03.041 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.041 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.041 Base_1 00:07:03.041 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.041 12:50:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:03.041 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.041 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.041 Base_2 00:07:03.041 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.041 12:50:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:03.041 12:50:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:03.041 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.041 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.041 [2024-11-26 12:50:20.637519] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:03.041 [2024-11-26 12:50:20.639564] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:03.041 [2024-11-26 12:50:20.639620] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:03.041 [2024-11-26 12:50:20.639631] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:03.041 [2024-11-26 12:50:20.639881] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:07:03.041 [2024-11-26 12:50:20.640005] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:03.041 [2024-11-26 12:50:20.640026] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:07:03.041 [2024-11-26 12:50:20.640141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:03.042 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.042 12:50:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:03.042 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.042 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.042 [2024-11-26 12:50:20.649459] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:03.042 [2024-11-26 12:50:20.649484] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:03.042 true 
00:07:03.042 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.042 12:50:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:03.042 12:50:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:03.042 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.042 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.042 [2024-11-26 12:50:20.665636] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:03.042 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.042 12:50:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:03.042 12:50:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:03.042 12:50:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:03.042 12:50:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:03.042 12:50:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:03.042 12:50:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:03.042 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.042 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.042 [2024-11-26 12:50:20.709363] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:03.042 [2024-11-26 12:50:20.709386] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:03.042 [2024-11-26 12:50:20.709410] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:03.042 true 
00:07:03.042 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.042 12:50:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:03.042 12:50:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:03.042 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.042 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.301 [2024-11-26 12:50:20.725521] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:03.301 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.302 12:50:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:03.302 12:50:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:03.302 12:50:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:03.302 12:50:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:03.302 12:50:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:03.302 12:50:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 72232 00:07:03.302 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 72232 ']' 00:07:03.302 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 72232 00:07:03.302 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:07:03.302 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:03.302 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72232 00:07:03.302 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:03.302 12:50:20 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:03.302 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72232' 00:07:03.302 killing process with pid 72232 00:07:03.302 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 72232 00:07:03.302 [2024-11-26 12:50:20.806299] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:03.302 [2024-11-26 12:50:20.806384] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:03.302 [2024-11-26 12:50:20.806438] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:03.302 [2024-11-26 12:50:20.806452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:07:03.302 12:50:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 72232 00:07:03.302 [2024-11-26 12:50:20.808422] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:03.569 12:50:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:03.569 00:07:03.569 real 0m1.491s 00:07:03.569 user 0m1.611s 00:07:03.569 sys 0m0.356s 00:07:03.569 12:50:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.569 12:50:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.569 ************************************ 00:07:03.569 END TEST raid0_resize_test 00:07:03.569 ************************************ 00:07:03.569 12:50:21 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:03.569 12:50:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:03.569 12:50:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.569 12:50:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:03.829 
************************************ 00:07:03.829 START TEST raid1_resize_test 00:07:03.829 ************************************ 00:07:03.829 12:50:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:07:03.829 12:50:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:03.829 12:50:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:03.829 12:50:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:03.829 12:50:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:03.829 12:50:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:03.829 12:50:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:03.829 12:50:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:03.829 12:50:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:03.829 12:50:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=72288 00:07:03.829 12:50:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:03.829 12:50:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 72288' 00:07:03.830 Process raid pid: 72288 00:07:03.830 12:50:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 72288 00:07:03.830 12:50:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 72288 ']' 00:07:03.830 12:50:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.830 12:50:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:03.830 12:50:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.830 12:50:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.830 12:50:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.830 [2024-11-26 12:50:21.335054] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:03.830 [2024-11-26 12:50:21.335169] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:03.830 [2024-11-26 12:50:21.481000] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.089 [2024-11-26 12:50:21.550440] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.089 [2024-11-26 12:50:21.625738] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.089 [2024-11-26 12:50:21.625777] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.661 Base_1 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 
00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.661 Base_2 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.661 [2024-11-26 12:50:22.180609] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:04.661 [2024-11-26 12:50:22.182667] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:04.661 [2024-11-26 12:50:22.182733] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:04.661 [2024-11-26 12:50:22.182745] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:04.661 [2024-11-26 12:50:22.183010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:07:04.661 [2024-11-26 12:50:22.183133] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:04.661 [2024-11-26 12:50:22.183147] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:07:04.661 [2024-11-26 12:50:22.183296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:04.661 
12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.661 [2024-11-26 12:50:22.192555] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:04.661 [2024-11-26 12:50:22.192588] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:04.661 true 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.661 [2024-11-26 12:50:22.208726] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 
-- # set +x 00:07:04.661 [2024-11-26 12:50:22.252458] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:04.661 [2024-11-26 12:50:22.252480] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:04.661 [2024-11-26 12:50:22.252502] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:04.661 true 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.661 [2024-11-26 12:50:22.268595] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 72288 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 72288 ']' 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 72288 
00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72288 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:04.661 killing process with pid 72288 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72288' 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 72288 00:07:04.661 [2024-11-26 12:50:22.335635] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:04.661 [2024-11-26 12:50:22.335719] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:04.661 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 72288 00:07:04.661 [2024-11-26 12:50:22.336144] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:04.661 [2024-11-26 12:50:22.336163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:07:04.921 [2024-11-26 12:50:22.337826] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:05.183 12:50:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:05.183 00:07:05.183 real 0m1.459s 00:07:05.183 user 0m1.534s 00:07:05.183 sys 0m0.372s 00:07:05.183 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.183 12:50:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.183 ************************************ 00:07:05.183 END TEST 
raid1_resize_test 00:07:05.183 ************************************ 00:07:05.183 12:50:22 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:05.183 12:50:22 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:05.183 12:50:22 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:05.183 12:50:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:05.183 12:50:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.183 12:50:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:05.183 ************************************ 00:07:05.183 START TEST raid_state_function_test 00:07:05.183 ************************************ 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:05.183 12:50:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72334 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:05.183 Process raid pid: 72334 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72334' 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72334 00:07:05.183 12:50:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 72334 ']' 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.183 12:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.444 [2024-11-26 12:50:22.869301] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:05.444 [2024-11-26 12:50:22.869769] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:05.444 [2024-11-26 12:50:23.032412] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.444 [2024-11-26 12:50:23.102994] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.703 [2024-11-26 12:50:23.179108] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:05.703 [2024-11-26 12:50:23.179152] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.273 12:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.273 12:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:06.273 12:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:06.273 12:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.273 12:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.273 [2024-11-26 12:50:23.687424] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:06.273 [2024-11-26 12:50:23.687499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:06.273 [2024-11-26 12:50:23.687514] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:06.273 [2024-11-26 12:50:23.687525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:06.273 12:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.273 12:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:06.273 12:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:06.274 12:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:06.274 12:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:06.274 12:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:06.274 12:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:06.274 12:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:06.274 12:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:06.274 12:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:06.274 
12:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:06.274 12:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:06.274 12:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.274 12:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.274 12:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.274 12:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.274 12:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:06.274 "name": "Existed_Raid", 00:07:06.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:06.274 "strip_size_kb": 64, 00:07:06.274 "state": "configuring", 00:07:06.274 "raid_level": "raid0", 00:07:06.274 "superblock": false, 00:07:06.274 "num_base_bdevs": 2, 00:07:06.274 "num_base_bdevs_discovered": 0, 00:07:06.274 "num_base_bdevs_operational": 2, 00:07:06.274 "base_bdevs_list": [ 00:07:06.274 { 00:07:06.274 "name": "BaseBdev1", 00:07:06.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:06.274 "is_configured": false, 00:07:06.274 "data_offset": 0, 00:07:06.274 "data_size": 0 00:07:06.274 }, 00:07:06.274 { 00:07:06.274 "name": "BaseBdev2", 00:07:06.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:06.274 "is_configured": false, 00:07:06.274 "data_offset": 0, 00:07:06.274 "data_size": 0 00:07:06.274 } 00:07:06.274 ] 00:07:06.274 }' 00:07:06.274 12:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:06.274 12:50:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.534 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:06.534 12:50:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.534 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.534 [2024-11-26 12:50:24.110578] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:06.534 [2024-11-26 12:50:24.110643] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:06.534 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.534 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:06.534 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.534 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.534 [2024-11-26 12:50:24.122573] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:06.534 [2024-11-26 12:50:24.122619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:06.534 [2024-11-26 12:50:24.122628] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:06.535 [2024-11-26 12:50:24.122637] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.535 [2024-11-26 12:50:24.149702] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:06.535 BaseBdev1 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.535 [ 00:07:06.535 { 00:07:06.535 "name": "BaseBdev1", 00:07:06.535 "aliases": [ 00:07:06.535 "9ae9d96c-cf6f-4e53-aa3d-63df16e33574" 00:07:06.535 ], 00:07:06.535 "product_name": "Malloc disk", 00:07:06.535 "block_size": 512, 00:07:06.535 "num_blocks": 65536, 00:07:06.535 "uuid": 
"9ae9d96c-cf6f-4e53-aa3d-63df16e33574", 00:07:06.535 "assigned_rate_limits": { 00:07:06.535 "rw_ios_per_sec": 0, 00:07:06.535 "rw_mbytes_per_sec": 0, 00:07:06.535 "r_mbytes_per_sec": 0, 00:07:06.535 "w_mbytes_per_sec": 0 00:07:06.535 }, 00:07:06.535 "claimed": true, 00:07:06.535 "claim_type": "exclusive_write", 00:07:06.535 "zoned": false, 00:07:06.535 "supported_io_types": { 00:07:06.535 "read": true, 00:07:06.535 "write": true, 00:07:06.535 "unmap": true, 00:07:06.535 "flush": true, 00:07:06.535 "reset": true, 00:07:06.535 "nvme_admin": false, 00:07:06.535 "nvme_io": false, 00:07:06.535 "nvme_io_md": false, 00:07:06.535 "write_zeroes": true, 00:07:06.535 "zcopy": true, 00:07:06.535 "get_zone_info": false, 00:07:06.535 "zone_management": false, 00:07:06.535 "zone_append": false, 00:07:06.535 "compare": false, 00:07:06.535 "compare_and_write": false, 00:07:06.535 "abort": true, 00:07:06.535 "seek_hole": false, 00:07:06.535 "seek_data": false, 00:07:06.535 "copy": true, 00:07:06.535 "nvme_iov_md": false 00:07:06.535 }, 00:07:06.535 "memory_domains": [ 00:07:06.535 { 00:07:06.535 "dma_device_id": "system", 00:07:06.535 "dma_device_type": 1 00:07:06.535 }, 00:07:06.535 { 00:07:06.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.535 "dma_device_type": 2 00:07:06.535 } 00:07:06.535 ], 00:07:06.535 "driver_specific": {} 00:07:06.535 } 00:07:06.535 ] 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:06.535 12:50:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.535 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.795 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.795 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:06.795 "name": "Existed_Raid", 00:07:06.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:06.795 "strip_size_kb": 64, 00:07:06.795 "state": "configuring", 00:07:06.795 "raid_level": "raid0", 00:07:06.795 "superblock": false, 00:07:06.795 "num_base_bdevs": 2, 00:07:06.795 "num_base_bdevs_discovered": 1, 00:07:06.795 "num_base_bdevs_operational": 2, 00:07:06.795 "base_bdevs_list": [ 00:07:06.795 { 00:07:06.795 "name": "BaseBdev1", 00:07:06.795 "uuid": "9ae9d96c-cf6f-4e53-aa3d-63df16e33574", 00:07:06.795 "is_configured": true, 00:07:06.795 "data_offset": 0, 
00:07:06.795 "data_size": 65536 00:07:06.795 }, 00:07:06.795 { 00:07:06.795 "name": "BaseBdev2", 00:07:06.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:06.795 "is_configured": false, 00:07:06.795 "data_offset": 0, 00:07:06.795 "data_size": 0 00:07:06.795 } 00:07:06.795 ] 00:07:06.795 }' 00:07:06.795 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:06.795 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.055 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:07.055 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.055 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.055 [2024-11-26 12:50:24.644957] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:07.055 [2024-11-26 12:50:24.645032] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:07.055 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.055 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:07.055 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.055 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.055 [2024-11-26 12:50:24.656939] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:07.056 [2024-11-26 12:50:24.659139] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:07.056 [2024-11-26 12:50:24.659196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:07:07.056 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.056 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:07.056 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:07.056 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:07.056 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:07.056 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:07.056 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:07.056 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:07.056 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:07.056 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:07.056 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:07.056 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:07.056 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.056 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.056 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:07.056 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.056 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.056 12:50:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.056 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:07.056 "name": "Existed_Raid", 00:07:07.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:07.056 "strip_size_kb": 64, 00:07:07.056 "state": "configuring", 00:07:07.056 "raid_level": "raid0", 00:07:07.056 "superblock": false, 00:07:07.056 "num_base_bdevs": 2, 00:07:07.056 "num_base_bdevs_discovered": 1, 00:07:07.056 "num_base_bdevs_operational": 2, 00:07:07.056 "base_bdevs_list": [ 00:07:07.056 { 00:07:07.056 "name": "BaseBdev1", 00:07:07.056 "uuid": "9ae9d96c-cf6f-4e53-aa3d-63df16e33574", 00:07:07.056 "is_configured": true, 00:07:07.056 "data_offset": 0, 00:07:07.056 "data_size": 65536 00:07:07.056 }, 00:07:07.056 { 00:07:07.056 "name": "BaseBdev2", 00:07:07.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:07.056 "is_configured": false, 00:07:07.056 "data_offset": 0, 00:07:07.056 "data_size": 0 00:07:07.056 } 00:07:07.056 ] 00:07:07.056 }' 00:07:07.056 12:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:07.056 12:50:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.626 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:07.626 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.626 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.626 [2024-11-26 12:50:25.103220] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:07.626 [2024-11-26 12:50:25.103367] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:07.626 [2024-11-26 12:50:25.103419] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:07.626 [2024-11-26 12:50:25.103838] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:07.626 [2024-11-26 12:50:25.104083] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:07.626 [2024-11-26 12:50:25.104141] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:07.626 [2024-11-26 12:50:25.104497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:07.626 BaseBdev2 00:07:07.626 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.626 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:07.626 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:07.626 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:07.626 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:07.626 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:07.626 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:07.626 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:07.626 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.626 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.626 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.626 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:07.626 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.626 12:50:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.626 [ 00:07:07.626 { 00:07:07.626 "name": "BaseBdev2", 00:07:07.626 "aliases": [ 00:07:07.626 "c14833fd-9fbb-4bb5-916e-527e22a7d57f" 00:07:07.626 ], 00:07:07.626 "product_name": "Malloc disk", 00:07:07.626 "block_size": 512, 00:07:07.626 "num_blocks": 65536, 00:07:07.626 "uuid": "c14833fd-9fbb-4bb5-916e-527e22a7d57f", 00:07:07.626 "assigned_rate_limits": { 00:07:07.626 "rw_ios_per_sec": 0, 00:07:07.626 "rw_mbytes_per_sec": 0, 00:07:07.626 "r_mbytes_per_sec": 0, 00:07:07.626 "w_mbytes_per_sec": 0 00:07:07.626 }, 00:07:07.626 "claimed": true, 00:07:07.626 "claim_type": "exclusive_write", 00:07:07.626 "zoned": false, 00:07:07.626 "supported_io_types": { 00:07:07.626 "read": true, 00:07:07.626 "write": true, 00:07:07.626 "unmap": true, 00:07:07.626 "flush": true, 00:07:07.627 "reset": true, 00:07:07.627 "nvme_admin": false, 00:07:07.627 "nvme_io": false, 00:07:07.627 "nvme_io_md": false, 00:07:07.627 "write_zeroes": true, 00:07:07.627 "zcopy": true, 00:07:07.627 "get_zone_info": false, 00:07:07.627 "zone_management": false, 00:07:07.627 "zone_append": false, 00:07:07.627 "compare": false, 00:07:07.627 "compare_and_write": false, 00:07:07.627 "abort": true, 00:07:07.627 "seek_hole": false, 00:07:07.627 "seek_data": false, 00:07:07.627 "copy": true, 00:07:07.627 "nvme_iov_md": false 00:07:07.627 }, 00:07:07.627 "memory_domains": [ 00:07:07.627 { 00:07:07.627 "dma_device_id": "system", 00:07:07.627 "dma_device_type": 1 00:07:07.627 }, 00:07:07.627 { 00:07:07.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.627 "dma_device_type": 2 00:07:07.627 } 00:07:07.627 ], 00:07:07.627 "driver_specific": {} 00:07:07.627 } 00:07:07.627 ] 00:07:07.627 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.627 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:07.627 12:50:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:07.627 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:07.627 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:07.627 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:07.627 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:07.627 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:07.627 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:07.627 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:07.627 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:07.627 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:07.627 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:07.627 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.627 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.627 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:07.627 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.627 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.627 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.627 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:07:07.627 "name": "Existed_Raid", 00:07:07.627 "uuid": "89a3e418-08e2-4f70-b2bd-c817828fe8c1", 00:07:07.627 "strip_size_kb": 64, 00:07:07.627 "state": "online", 00:07:07.627 "raid_level": "raid0", 00:07:07.627 "superblock": false, 00:07:07.627 "num_base_bdevs": 2, 00:07:07.627 "num_base_bdevs_discovered": 2, 00:07:07.627 "num_base_bdevs_operational": 2, 00:07:07.627 "base_bdevs_list": [ 00:07:07.627 { 00:07:07.627 "name": "BaseBdev1", 00:07:07.627 "uuid": "9ae9d96c-cf6f-4e53-aa3d-63df16e33574", 00:07:07.627 "is_configured": true, 00:07:07.627 "data_offset": 0, 00:07:07.627 "data_size": 65536 00:07:07.627 }, 00:07:07.627 { 00:07:07.627 "name": "BaseBdev2", 00:07:07.627 "uuid": "c14833fd-9fbb-4bb5-916e-527e22a7d57f", 00:07:07.627 "is_configured": true, 00:07:07.627 "data_offset": 0, 00:07:07.627 "data_size": 65536 00:07:07.627 } 00:07:07.627 ] 00:07:07.627 }' 00:07:07.627 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:07.627 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.197 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:08.197 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:08.197 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:08.197 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:08.197 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:08.197 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:08.197 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:08.197 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:08.197 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:08.197 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.197 [2024-11-26 12:50:25.618703] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:08.198 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.198 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:08.198 "name": "Existed_Raid", 00:07:08.198 "aliases": [ 00:07:08.198 "89a3e418-08e2-4f70-b2bd-c817828fe8c1" 00:07:08.198 ], 00:07:08.198 "product_name": "Raid Volume", 00:07:08.198 "block_size": 512, 00:07:08.198 "num_blocks": 131072, 00:07:08.198 "uuid": "89a3e418-08e2-4f70-b2bd-c817828fe8c1", 00:07:08.198 "assigned_rate_limits": { 00:07:08.198 "rw_ios_per_sec": 0, 00:07:08.198 "rw_mbytes_per_sec": 0, 00:07:08.198 "r_mbytes_per_sec": 0, 00:07:08.198 "w_mbytes_per_sec": 0 00:07:08.198 }, 00:07:08.198 "claimed": false, 00:07:08.198 "zoned": false, 00:07:08.198 "supported_io_types": { 00:07:08.198 "read": true, 00:07:08.198 "write": true, 00:07:08.198 "unmap": true, 00:07:08.198 "flush": true, 00:07:08.198 "reset": true, 00:07:08.198 "nvme_admin": false, 00:07:08.198 "nvme_io": false, 00:07:08.198 "nvme_io_md": false, 00:07:08.198 "write_zeroes": true, 00:07:08.198 "zcopy": false, 00:07:08.198 "get_zone_info": false, 00:07:08.198 "zone_management": false, 00:07:08.198 "zone_append": false, 00:07:08.198 "compare": false, 00:07:08.198 "compare_and_write": false, 00:07:08.198 "abort": false, 00:07:08.198 "seek_hole": false, 00:07:08.198 "seek_data": false, 00:07:08.198 "copy": false, 00:07:08.198 "nvme_iov_md": false 00:07:08.198 }, 00:07:08.198 "memory_domains": [ 00:07:08.198 { 00:07:08.198 "dma_device_id": "system", 00:07:08.198 "dma_device_type": 1 00:07:08.198 }, 00:07:08.198 { 00:07:08.198 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:08.198 "dma_device_type": 2 00:07:08.198 }, 00:07:08.198 { 00:07:08.198 "dma_device_id": "system", 00:07:08.198 "dma_device_type": 1 00:07:08.198 }, 00:07:08.198 { 00:07:08.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.198 "dma_device_type": 2 00:07:08.198 } 00:07:08.198 ], 00:07:08.198 "driver_specific": { 00:07:08.198 "raid": { 00:07:08.198 "uuid": "89a3e418-08e2-4f70-b2bd-c817828fe8c1", 00:07:08.198 "strip_size_kb": 64, 00:07:08.198 "state": "online", 00:07:08.198 "raid_level": "raid0", 00:07:08.198 "superblock": false, 00:07:08.198 "num_base_bdevs": 2, 00:07:08.198 "num_base_bdevs_discovered": 2, 00:07:08.198 "num_base_bdevs_operational": 2, 00:07:08.198 "base_bdevs_list": [ 00:07:08.198 { 00:07:08.198 "name": "BaseBdev1", 00:07:08.198 "uuid": "9ae9d96c-cf6f-4e53-aa3d-63df16e33574", 00:07:08.198 "is_configured": true, 00:07:08.198 "data_offset": 0, 00:07:08.198 "data_size": 65536 00:07:08.198 }, 00:07:08.198 { 00:07:08.198 "name": "BaseBdev2", 00:07:08.198 "uuid": "c14833fd-9fbb-4bb5-916e-527e22a7d57f", 00:07:08.198 "is_configured": true, 00:07:08.198 "data_offset": 0, 00:07:08.198 "data_size": 65536 00:07:08.198 } 00:07:08.198 ] 00:07:08.198 } 00:07:08.198 } 00:07:08.198 }' 00:07:08.198 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:08.198 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:08.198 BaseBdev2' 00:07:08.198 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.198 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:08.198 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:08.198 12:50:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:08.198 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.198 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.198 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.198 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.198 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:08.198 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:08.198 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:08.198 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:08.198 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.198 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.198 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.198 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.198 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:08.198 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:08.198 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:08.198 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.198 12:50:25 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:08.198 [2024-11-26 12:50:25.865998] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:08.198 [2024-11-26 12:50:25.866036] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:08.198 [2024-11-26 12:50:25.866097] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:08.458 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.458 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:08.458 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:08.458 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:08.458 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:08.458 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:08.458 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:08.458 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:08.458 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:08.458 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:08.458 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:08.458 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:08.458 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:08.458 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:08.458 12:50:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:08.458 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:08.458 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.458 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:08.458 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.458 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.458 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.459 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:08.459 "name": "Existed_Raid", 00:07:08.459 "uuid": "89a3e418-08e2-4f70-b2bd-c817828fe8c1", 00:07:08.459 "strip_size_kb": 64, 00:07:08.459 "state": "offline", 00:07:08.459 "raid_level": "raid0", 00:07:08.459 "superblock": false, 00:07:08.459 "num_base_bdevs": 2, 00:07:08.459 "num_base_bdevs_discovered": 1, 00:07:08.459 "num_base_bdevs_operational": 1, 00:07:08.459 "base_bdevs_list": [ 00:07:08.459 { 00:07:08.459 "name": null, 00:07:08.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.459 "is_configured": false, 00:07:08.459 "data_offset": 0, 00:07:08.459 "data_size": 65536 00:07:08.459 }, 00:07:08.459 { 00:07:08.459 "name": "BaseBdev2", 00:07:08.459 "uuid": "c14833fd-9fbb-4bb5-916e-527e22a7d57f", 00:07:08.459 "is_configured": true, 00:07:08.459 "data_offset": 0, 00:07:08.459 "data_size": 65536 00:07:08.459 } 00:07:08.459 ] 00:07:08.459 }' 00:07:08.459 12:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:08.459 12:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.718 12:50:26 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:08.718 12:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:08.718 12:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:08.718 12:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.718 12:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.718 12:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.718 12:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.719 12:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:08.719 12:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:08.719 12:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:08.719 12:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.719 12:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.719 [2024-11-26 12:50:26.394026] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:08.719 [2024-11-26 12:50:26.394080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:08.979 12:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.979 12:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:08.979 12:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:08.979 12:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:08.979 12:50:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.979 12:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.979 12:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.979 12:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.979 12:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:08.979 12:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:08.979 12:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:08.979 12:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72334 00:07:08.979 12:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 72334 ']' 00:07:08.979 12:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 72334 00:07:08.979 12:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:08.979 12:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:08.979 12:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72334 00:07:08.979 killing process with pid 72334 00:07:08.979 12:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:08.979 12:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:08.979 12:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72334' 00:07:08.979 12:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 72334 00:07:08.979 [2024-11-26 12:50:26.504827] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:07:08.979 12:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 72334 00:07:08.979 [2024-11-26 12:50:26.506383] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:09.239 12:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:09.239 00:07:09.239 real 0m4.090s 00:07:09.239 user 0m6.261s 00:07:09.239 sys 0m0.848s 00:07:09.239 12:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.239 12:50:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.239 ************************************ 00:07:09.239 END TEST raid_state_function_test 00:07:09.239 ************************************ 00:07:09.499 12:50:26 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:09.499 12:50:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:09.499 12:50:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.499 12:50:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:09.499 ************************************ 00:07:09.499 START TEST raid_state_function_test_sb 00:07:09.499 ************************************ 00:07:09.499 12:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:07:09.499 12:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:09.499 12:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:09.499 12:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:09.500 12:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:09.500 12:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:07:09.500 12:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:09.500 12:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:09.500 12:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:09.500 12:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:09.500 12:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:09.500 12:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:09.500 12:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:09.500 12:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:09.500 12:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:09.500 12:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:09.500 12:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:09.500 12:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:09.500 12:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:09.500 12:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:09.500 12:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:09.500 12:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:09.500 12:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:09.500 12:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:07:09.500 12:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72576 00:07:09.500 12:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:09.500 12:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72576' 00:07:09.500 Process raid pid: 72576 00:07:09.500 12:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72576 00:07:09.500 12:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 72576 ']' 00:07:09.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.500 12:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.500 12:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.500 12:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.500 12:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.500 12:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:09.500 [2024-11-26 12:50:27.029353] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:09.500 [2024-11-26 12:50:27.029478] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.759 [2024-11-26 12:50:27.189076] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.759 [2024-11-26 12:50:27.258959] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.759 [2024-11-26 12:50:27.334577] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.759 [2024-11-26 12:50:27.334617] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.329 12:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.329 12:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:10.329 12:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:10.329 12:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.329 12:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:10.329 [2024-11-26 12:50:27.850318] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:10.329 [2024-11-26 12:50:27.850458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:10.329 [2024-11-26 12:50:27.850475] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:10.329 [2024-11-26 12:50:27.850486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:10.329 12:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.329 
12:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:10.329 12:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:10.329 12:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:10.329 12:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:10.329 12:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:10.329 12:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:10.329 12:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.329 12:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.329 12:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.329 12:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.329 12:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:10.329 12:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.329 12:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.329 12:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:10.329 12:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.329 12:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.329 "name": "Existed_Raid", 00:07:10.329 "uuid": "c23b2830-fd08-4955-aa2f-63b2c7f0b74d", 00:07:10.329 "strip_size_kb": 
64, 00:07:10.329 "state": "configuring", 00:07:10.329 "raid_level": "raid0", 00:07:10.329 "superblock": true, 00:07:10.329 "num_base_bdevs": 2, 00:07:10.329 "num_base_bdevs_discovered": 0, 00:07:10.329 "num_base_bdevs_operational": 2, 00:07:10.329 "base_bdevs_list": [ 00:07:10.329 { 00:07:10.329 "name": "BaseBdev1", 00:07:10.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.329 "is_configured": false, 00:07:10.329 "data_offset": 0, 00:07:10.329 "data_size": 0 00:07:10.329 }, 00:07:10.329 { 00:07:10.329 "name": "BaseBdev2", 00:07:10.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.329 "is_configured": false, 00:07:10.329 "data_offset": 0, 00:07:10.329 "data_size": 0 00:07:10.329 } 00:07:10.329 ] 00:07:10.329 }' 00:07:10.329 12:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.329 12:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:10.900 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:10.900 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.900 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:10.900 [2024-11-26 12:50:28.297442] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:10.900 [2024-11-26 12:50:28.297494] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:10.900 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.900 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:10.900 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.900 12:50:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:10.900 [2024-11-26 12:50:28.309472] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:10.900 [2024-11-26 12:50:28.309511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:10.900 [2024-11-26 12:50:28.309521] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:10.900 [2024-11-26 12:50:28.309531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:10.900 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.900 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:10.900 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.900 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:10.900 [2024-11-26 12:50:28.336456] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:10.900 BaseBdev1 00:07:10.900 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.900 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:10.900 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:10.900 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:10.900 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:10.900 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:10.900 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:07:10.900 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:10.900 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.900 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:10.900 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.900 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:10.900 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.900 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:10.900 [ 00:07:10.900 { 00:07:10.900 "name": "BaseBdev1", 00:07:10.900 "aliases": [ 00:07:10.900 "46da906a-3263-4d1f-b065-a792c0d7117e" 00:07:10.900 ], 00:07:10.900 "product_name": "Malloc disk", 00:07:10.900 "block_size": 512, 00:07:10.900 "num_blocks": 65536, 00:07:10.900 "uuid": "46da906a-3263-4d1f-b065-a792c0d7117e", 00:07:10.900 "assigned_rate_limits": { 00:07:10.900 "rw_ios_per_sec": 0, 00:07:10.900 "rw_mbytes_per_sec": 0, 00:07:10.900 "r_mbytes_per_sec": 0, 00:07:10.900 "w_mbytes_per_sec": 0 00:07:10.900 }, 00:07:10.900 "claimed": true, 00:07:10.900 "claim_type": "exclusive_write", 00:07:10.900 "zoned": false, 00:07:10.900 "supported_io_types": { 00:07:10.900 "read": true, 00:07:10.900 "write": true, 00:07:10.900 "unmap": true, 00:07:10.900 "flush": true, 00:07:10.900 "reset": true, 00:07:10.900 "nvme_admin": false, 00:07:10.900 "nvme_io": false, 00:07:10.900 "nvme_io_md": false, 00:07:10.900 "write_zeroes": true, 00:07:10.900 "zcopy": true, 00:07:10.900 "get_zone_info": false, 00:07:10.900 "zone_management": false, 00:07:10.900 "zone_append": false, 00:07:10.900 "compare": false, 00:07:10.900 "compare_and_write": false, 00:07:10.900 
"abort": true, 00:07:10.900 "seek_hole": false, 00:07:10.900 "seek_data": false, 00:07:10.900 "copy": true, 00:07:10.900 "nvme_iov_md": false 00:07:10.900 }, 00:07:10.900 "memory_domains": [ 00:07:10.900 { 00:07:10.900 "dma_device_id": "system", 00:07:10.900 "dma_device_type": 1 00:07:10.900 }, 00:07:10.900 { 00:07:10.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.900 "dma_device_type": 2 00:07:10.900 } 00:07:10.900 ], 00:07:10.900 "driver_specific": {} 00:07:10.900 } 00:07:10.900 ] 00:07:10.900 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.900 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:10.900 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:10.900 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:10.900 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:10.901 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:10.901 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:10.901 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:10.901 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.901 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.901 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.901 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.901 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:10.901 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:10.901 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.901 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:10.901 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.901 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.901 "name": "Existed_Raid", 00:07:10.901 "uuid": "60254669-0096-4111-a81a-02f86a9e2bd7", 00:07:10.901 "strip_size_kb": 64, 00:07:10.901 "state": "configuring", 00:07:10.901 "raid_level": "raid0", 00:07:10.901 "superblock": true, 00:07:10.901 "num_base_bdevs": 2, 00:07:10.901 "num_base_bdevs_discovered": 1, 00:07:10.901 "num_base_bdevs_operational": 2, 00:07:10.901 "base_bdevs_list": [ 00:07:10.901 { 00:07:10.901 "name": "BaseBdev1", 00:07:10.901 "uuid": "46da906a-3263-4d1f-b065-a792c0d7117e", 00:07:10.901 "is_configured": true, 00:07:10.901 "data_offset": 2048, 00:07:10.901 "data_size": 63488 00:07:10.901 }, 00:07:10.901 { 00:07:10.901 "name": "BaseBdev2", 00:07:10.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.901 "is_configured": false, 00:07:10.901 "data_offset": 0, 00:07:10.901 "data_size": 0 00:07:10.901 } 00:07:10.901 ] 00:07:10.901 }' 00:07:10.901 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.901 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.161 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:11.161 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.161 12:50:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:11.161 [2024-11-26 12:50:28.835673] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:11.161 [2024-11-26 12:50:28.835836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:11.422 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.422 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:11.422 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.422 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.422 [2024-11-26 12:50:28.847702] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:11.422 [2024-11-26 12:50:28.850123] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:11.422 [2024-11-26 12:50:28.850222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:11.422 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.422 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:11.422 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:11.422 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:11.422 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:11.422 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:11.422 12:50:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:11.422 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:11.422 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:11.422 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.422 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.422 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.422 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.422 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.422 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:11.422 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.422 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.422 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.422 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.422 "name": "Existed_Raid", 00:07:11.422 "uuid": "bedd7c97-5012-432c-99ce-35b0b58c3ce3", 00:07:11.422 "strip_size_kb": 64, 00:07:11.422 "state": "configuring", 00:07:11.422 "raid_level": "raid0", 00:07:11.422 "superblock": true, 00:07:11.422 "num_base_bdevs": 2, 00:07:11.422 "num_base_bdevs_discovered": 1, 00:07:11.422 "num_base_bdevs_operational": 2, 00:07:11.422 "base_bdevs_list": [ 00:07:11.422 { 00:07:11.422 "name": "BaseBdev1", 00:07:11.422 "uuid": "46da906a-3263-4d1f-b065-a792c0d7117e", 00:07:11.422 "is_configured": true, 00:07:11.422 "data_offset": 2048, 
00:07:11.422 "data_size": 63488 00:07:11.422 }, 00:07:11.422 { 00:07:11.422 "name": "BaseBdev2", 00:07:11.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.422 "is_configured": false, 00:07:11.422 "data_offset": 0, 00:07:11.422 "data_size": 0 00:07:11.422 } 00:07:11.422 ] 00:07:11.422 }' 00:07:11.422 12:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.422 12:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.682 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:11.682 12:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.682 12:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.682 [2024-11-26 12:50:29.325249] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:11.682 [2024-11-26 12:50:29.325511] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:11.682 [2024-11-26 12:50:29.325531] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:11.682 [2024-11-26 12:50:29.325904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:11.682 [2024-11-26 12:50:29.326072] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:11.682 [2024-11-26 12:50:29.326090] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:11.682 BaseBdev2 00:07:11.682 [2024-11-26 12:50:29.326254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:11.682 12:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.682 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:11.682 12:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:11.682 12:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:11.682 12:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:11.682 12:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:11.682 12:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:11.682 12:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:11.683 12:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.683 12:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.683 12:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.683 12:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:11.683 12:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.683 12:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.683 [ 00:07:11.683 { 00:07:11.683 "name": "BaseBdev2", 00:07:11.683 "aliases": [ 00:07:11.683 "6c3b5546-e8cc-44cf-b1bf-d930ba491df5" 00:07:11.683 ], 00:07:11.683 "product_name": "Malloc disk", 00:07:11.683 "block_size": 512, 00:07:11.683 "num_blocks": 65536, 00:07:11.683 "uuid": "6c3b5546-e8cc-44cf-b1bf-d930ba491df5", 00:07:11.683 "assigned_rate_limits": { 00:07:11.683 "rw_ios_per_sec": 0, 00:07:11.683 "rw_mbytes_per_sec": 0, 00:07:11.683 "r_mbytes_per_sec": 0, 00:07:11.683 "w_mbytes_per_sec": 0 00:07:11.683 }, 00:07:11.683 "claimed": true, 00:07:11.683 "claim_type": 
"exclusive_write", 00:07:11.683 "zoned": false, 00:07:11.683 "supported_io_types": { 00:07:11.683 "read": true, 00:07:11.943 "write": true, 00:07:11.943 "unmap": true, 00:07:11.943 "flush": true, 00:07:11.943 "reset": true, 00:07:11.943 "nvme_admin": false, 00:07:11.943 "nvme_io": false, 00:07:11.943 "nvme_io_md": false, 00:07:11.943 "write_zeroes": true, 00:07:11.943 "zcopy": true, 00:07:11.943 "get_zone_info": false, 00:07:11.943 "zone_management": false, 00:07:11.943 "zone_append": false, 00:07:11.943 "compare": false, 00:07:11.943 "compare_and_write": false, 00:07:11.943 "abort": true, 00:07:11.943 "seek_hole": false, 00:07:11.943 "seek_data": false, 00:07:11.943 "copy": true, 00:07:11.943 "nvme_iov_md": false 00:07:11.943 }, 00:07:11.943 "memory_domains": [ 00:07:11.943 { 00:07:11.943 "dma_device_id": "system", 00:07:11.943 "dma_device_type": 1 00:07:11.943 }, 00:07:11.943 { 00:07:11.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.943 "dma_device_type": 2 00:07:11.943 } 00:07:11.943 ], 00:07:11.943 "driver_specific": {} 00:07:11.943 } 00:07:11.943 ] 00:07:11.943 12:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.943 12:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:11.943 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:11.943 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:11.943 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:11.943 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:11.943 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:11.943 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:11.943 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:11.943 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:11.943 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.943 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.943 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.943 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.943 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.943 12:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.943 12:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.943 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:11.943 12:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.943 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.943 "name": "Existed_Raid", 00:07:11.943 "uuid": "bedd7c97-5012-432c-99ce-35b0b58c3ce3", 00:07:11.943 "strip_size_kb": 64, 00:07:11.943 "state": "online", 00:07:11.943 "raid_level": "raid0", 00:07:11.943 "superblock": true, 00:07:11.943 "num_base_bdevs": 2, 00:07:11.943 "num_base_bdevs_discovered": 2, 00:07:11.943 "num_base_bdevs_operational": 2, 00:07:11.943 "base_bdevs_list": [ 00:07:11.943 { 00:07:11.943 "name": "BaseBdev1", 00:07:11.943 "uuid": "46da906a-3263-4d1f-b065-a792c0d7117e", 00:07:11.943 "is_configured": true, 00:07:11.943 "data_offset": 2048, 00:07:11.943 "data_size": 63488 
00:07:11.943 }, 00:07:11.943 { 00:07:11.943 "name": "BaseBdev2", 00:07:11.943 "uuid": "6c3b5546-e8cc-44cf-b1bf-d930ba491df5", 00:07:11.943 "is_configured": true, 00:07:11.943 "data_offset": 2048, 00:07:11.943 "data_size": 63488 00:07:11.943 } 00:07:11.943 ] 00:07:11.943 }' 00:07:11.943 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.943 12:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.203 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:12.203 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:12.203 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:12.203 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:12.203 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:12.203 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:12.203 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:12.203 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:12.203 12:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.203 12:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.203 [2024-11-26 12:50:29.804815] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:12.203 12:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.203 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:12.203 "name": 
"Existed_Raid", 00:07:12.203 "aliases": [ 00:07:12.203 "bedd7c97-5012-432c-99ce-35b0b58c3ce3" 00:07:12.203 ], 00:07:12.203 "product_name": "Raid Volume", 00:07:12.203 "block_size": 512, 00:07:12.203 "num_blocks": 126976, 00:07:12.203 "uuid": "bedd7c97-5012-432c-99ce-35b0b58c3ce3", 00:07:12.203 "assigned_rate_limits": { 00:07:12.203 "rw_ios_per_sec": 0, 00:07:12.203 "rw_mbytes_per_sec": 0, 00:07:12.203 "r_mbytes_per_sec": 0, 00:07:12.203 "w_mbytes_per_sec": 0 00:07:12.203 }, 00:07:12.203 "claimed": false, 00:07:12.203 "zoned": false, 00:07:12.203 "supported_io_types": { 00:07:12.204 "read": true, 00:07:12.204 "write": true, 00:07:12.204 "unmap": true, 00:07:12.204 "flush": true, 00:07:12.204 "reset": true, 00:07:12.204 "nvme_admin": false, 00:07:12.204 "nvme_io": false, 00:07:12.204 "nvme_io_md": false, 00:07:12.204 "write_zeroes": true, 00:07:12.204 "zcopy": false, 00:07:12.204 "get_zone_info": false, 00:07:12.204 "zone_management": false, 00:07:12.204 "zone_append": false, 00:07:12.204 "compare": false, 00:07:12.204 "compare_and_write": false, 00:07:12.204 "abort": false, 00:07:12.204 "seek_hole": false, 00:07:12.204 "seek_data": false, 00:07:12.204 "copy": false, 00:07:12.204 "nvme_iov_md": false 00:07:12.204 }, 00:07:12.204 "memory_domains": [ 00:07:12.204 { 00:07:12.204 "dma_device_id": "system", 00:07:12.204 "dma_device_type": 1 00:07:12.204 }, 00:07:12.204 { 00:07:12.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.204 "dma_device_type": 2 00:07:12.204 }, 00:07:12.204 { 00:07:12.204 "dma_device_id": "system", 00:07:12.204 "dma_device_type": 1 00:07:12.204 }, 00:07:12.204 { 00:07:12.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.204 "dma_device_type": 2 00:07:12.204 } 00:07:12.204 ], 00:07:12.204 "driver_specific": { 00:07:12.204 "raid": { 00:07:12.204 "uuid": "bedd7c97-5012-432c-99ce-35b0b58c3ce3", 00:07:12.204 "strip_size_kb": 64, 00:07:12.204 "state": "online", 00:07:12.204 "raid_level": "raid0", 00:07:12.204 "superblock": true, 00:07:12.204 
"num_base_bdevs": 2, 00:07:12.204 "num_base_bdevs_discovered": 2, 00:07:12.204 "num_base_bdevs_operational": 2, 00:07:12.204 "base_bdevs_list": [ 00:07:12.204 { 00:07:12.204 "name": "BaseBdev1", 00:07:12.204 "uuid": "46da906a-3263-4d1f-b065-a792c0d7117e", 00:07:12.204 "is_configured": true, 00:07:12.204 "data_offset": 2048, 00:07:12.204 "data_size": 63488 00:07:12.204 }, 00:07:12.204 { 00:07:12.204 "name": "BaseBdev2", 00:07:12.204 "uuid": "6c3b5546-e8cc-44cf-b1bf-d930ba491df5", 00:07:12.204 "is_configured": true, 00:07:12.204 "data_offset": 2048, 00:07:12.204 "data_size": 63488 00:07:12.204 } 00:07:12.204 ] 00:07:12.204 } 00:07:12.204 } 00:07:12.204 }' 00:07:12.204 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:12.464 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:12.464 BaseBdev2' 00:07:12.464 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:12.464 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:12.464 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:12.464 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:12.464 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:12.464 12:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.464 12:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.464 12:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:12.464 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:12.464 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:12.464 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:12.464 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:12.464 12:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:12.464 12:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.464 12:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.464 12:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.464 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:12.464 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:12.464 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:12.464 12:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.464 12:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.464 [2024-11-26 12:50:30.052089] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:12.464 [2024-11-26 12:50:30.052172] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:12.464 [2024-11-26 12:50:30.052248] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:12.464 12:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:12.464 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:12.464 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:12.464 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:12.464 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:12.464 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:12.464 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:12.464 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:12.464 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:12.464 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:12.464 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:12.464 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:12.464 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.464 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.464 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.464 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.464 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:12.464 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.464 12:50:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.464 12:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.464 12:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.464 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.464 "name": "Existed_Raid", 00:07:12.464 "uuid": "bedd7c97-5012-432c-99ce-35b0b58c3ce3", 00:07:12.464 "strip_size_kb": 64, 00:07:12.464 "state": "offline", 00:07:12.464 "raid_level": "raid0", 00:07:12.464 "superblock": true, 00:07:12.464 "num_base_bdevs": 2, 00:07:12.464 "num_base_bdevs_discovered": 1, 00:07:12.464 "num_base_bdevs_operational": 1, 00:07:12.464 "base_bdevs_list": [ 00:07:12.464 { 00:07:12.464 "name": null, 00:07:12.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.464 "is_configured": false, 00:07:12.464 "data_offset": 0, 00:07:12.464 "data_size": 63488 00:07:12.464 }, 00:07:12.464 { 00:07:12.464 "name": "BaseBdev2", 00:07:12.464 "uuid": "6c3b5546-e8cc-44cf-b1bf-d930ba491df5", 00:07:12.464 "is_configured": true, 00:07:12.464 "data_offset": 2048, 00:07:12.464 "data_size": 63488 00:07:12.464 } 00:07:12.464 ] 00:07:12.464 }' 00:07:12.464 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.464 12:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.033 12:50:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.033 [2024-11-26 12:50:30.516319] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:13.033 [2024-11-26 12:50:30.516486] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72576 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 72576 ']' 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 72576 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72576 00:07:13.033 killing process with pid 72576 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72576' 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 72576 00:07:13.033 [2024-11-26 12:50:30.637101] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:13.033 12:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 72576 00:07:13.033 [2024-11-26 12:50:30.638721] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:13.603 12:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 
0 00:07:13.603 00:07:13.603 real 0m4.070s 00:07:13.603 user 0m6.202s 00:07:13.603 sys 0m0.879s 00:07:13.603 12:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.603 12:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.603 ************************************ 00:07:13.603 END TEST raid_state_function_test_sb 00:07:13.603 ************************************ 00:07:13.603 12:50:31 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:13.603 12:50:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:13.603 12:50:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.603 12:50:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:13.603 ************************************ 00:07:13.603 START TEST raid_superblock_test 00:07:13.603 ************************************ 00:07:13.603 12:50:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:07:13.603 12:50:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:13.603 12:50:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:13.603 12:50:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:13.603 12:50:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:13.603 12:50:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:13.603 12:50:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:13.603 12:50:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:13.603 12:50:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:13.603 12:50:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:13.603 12:50:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:13.603 12:50:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:13.603 12:50:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:13.603 12:50:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:13.603 12:50:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:13.603 12:50:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:13.603 12:50:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:13.603 12:50:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72817 00:07:13.603 12:50:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:13.603 12:50:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72817 00:07:13.603 12:50:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72817 ']' 00:07:13.603 12:50:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.603 12:50:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.603 12:50:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:13.603 12:50:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.603 12:50:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.603 [2024-11-26 12:50:31.160555] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:13.603 [2024-11-26 12:50:31.160756] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72817 ] 00:07:13.863 [2024-11-26 12:50:31.320671] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.863 [2024-11-26 12:50:31.391311] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.863 [2024-11-26 12:50:31.467773] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.863 [2024-11-26 12:50:31.467894] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.433 12:50:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.433 12:50:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:14.433 12:50:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:14.433 12:50:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:14.433 12:50:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:14.433 12:50:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:14.433 12:50:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:14.433 12:50:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:14.433 12:50:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:14.433 12:50:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:14.433 12:50:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:14.433 12:50:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.433 12:50:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.433 malloc1 00:07:14.433 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.433 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:14.433 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.433 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.433 [2024-11-26 12:50:32.010371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:14.433 [2024-11-26 12:50:32.010543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:14.433 [2024-11-26 12:50:32.010608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:14.433 [2024-11-26 12:50:32.010651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:14.433 [2024-11-26 12:50:32.013145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:14.433 [2024-11-26 12:50:32.013236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:14.433 pt1 00:07:14.433 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:14.434 12:50:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.434 malloc2 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.434 [2024-11-26 12:50:32.058552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:14.434 [2024-11-26 12:50:32.058612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:14.434 [2024-11-26 12:50:32.058629] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:14.434 
[2024-11-26 12:50:32.058641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:14.434 [2024-11-26 12:50:32.061005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:14.434 [2024-11-26 12:50:32.061042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:14.434 pt2 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.434 [2024-11-26 12:50:32.070584] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:14.434 [2024-11-26 12:50:32.072662] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:14.434 [2024-11-26 12:50:32.072848] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:14.434 [2024-11-26 12:50:32.072866] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:14.434 [2024-11-26 12:50:32.073108] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:14.434 [2024-11-26 12:50:32.073264] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:14.434 [2024-11-26 12:50:32.073276] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:07:14.434 [2024-11-26 12:50:32.073405] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.434 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.694 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.694 "name": "raid_bdev1", 00:07:14.694 "uuid": 
"ee978bf9-4f72-4fdc-9912-2761fb60890b", 00:07:14.694 "strip_size_kb": 64, 00:07:14.694 "state": "online", 00:07:14.694 "raid_level": "raid0", 00:07:14.694 "superblock": true, 00:07:14.694 "num_base_bdevs": 2, 00:07:14.694 "num_base_bdevs_discovered": 2, 00:07:14.694 "num_base_bdevs_operational": 2, 00:07:14.694 "base_bdevs_list": [ 00:07:14.694 { 00:07:14.694 "name": "pt1", 00:07:14.694 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:14.694 "is_configured": true, 00:07:14.694 "data_offset": 2048, 00:07:14.694 "data_size": 63488 00:07:14.694 }, 00:07:14.694 { 00:07:14.694 "name": "pt2", 00:07:14.694 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:14.694 "is_configured": true, 00:07:14.694 "data_offset": 2048, 00:07:14.694 "data_size": 63488 00:07:14.694 } 00:07:14.694 ] 00:07:14.694 }' 00:07:14.694 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.694 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.954 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:14.955 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:14.955 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:14.955 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:14.955 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:14.955 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:14.955 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:14.955 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:14.955 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.955 12:50:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.955 [2024-11-26 12:50:32.478105] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:14.955 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.955 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:14.955 "name": "raid_bdev1", 00:07:14.955 "aliases": [ 00:07:14.955 "ee978bf9-4f72-4fdc-9912-2761fb60890b" 00:07:14.955 ], 00:07:14.955 "product_name": "Raid Volume", 00:07:14.955 "block_size": 512, 00:07:14.955 "num_blocks": 126976, 00:07:14.955 "uuid": "ee978bf9-4f72-4fdc-9912-2761fb60890b", 00:07:14.955 "assigned_rate_limits": { 00:07:14.955 "rw_ios_per_sec": 0, 00:07:14.955 "rw_mbytes_per_sec": 0, 00:07:14.955 "r_mbytes_per_sec": 0, 00:07:14.955 "w_mbytes_per_sec": 0 00:07:14.955 }, 00:07:14.955 "claimed": false, 00:07:14.955 "zoned": false, 00:07:14.955 "supported_io_types": { 00:07:14.955 "read": true, 00:07:14.955 "write": true, 00:07:14.955 "unmap": true, 00:07:14.955 "flush": true, 00:07:14.955 "reset": true, 00:07:14.955 "nvme_admin": false, 00:07:14.955 "nvme_io": false, 00:07:14.955 "nvme_io_md": false, 00:07:14.955 "write_zeroes": true, 00:07:14.955 "zcopy": false, 00:07:14.955 "get_zone_info": false, 00:07:14.955 "zone_management": false, 00:07:14.955 "zone_append": false, 00:07:14.955 "compare": false, 00:07:14.955 "compare_and_write": false, 00:07:14.955 "abort": false, 00:07:14.955 "seek_hole": false, 00:07:14.955 "seek_data": false, 00:07:14.955 "copy": false, 00:07:14.955 "nvme_iov_md": false 00:07:14.955 }, 00:07:14.955 "memory_domains": [ 00:07:14.955 { 00:07:14.955 "dma_device_id": "system", 00:07:14.955 "dma_device_type": 1 00:07:14.955 }, 00:07:14.955 { 00:07:14.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.955 "dma_device_type": 2 00:07:14.955 }, 00:07:14.955 { 00:07:14.955 "dma_device_id": "system", 00:07:14.955 "dma_device_type": 
1 00:07:14.955 }, 00:07:14.955 { 00:07:14.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.955 "dma_device_type": 2 00:07:14.955 } 00:07:14.955 ], 00:07:14.955 "driver_specific": { 00:07:14.955 "raid": { 00:07:14.955 "uuid": "ee978bf9-4f72-4fdc-9912-2761fb60890b", 00:07:14.955 "strip_size_kb": 64, 00:07:14.955 "state": "online", 00:07:14.955 "raid_level": "raid0", 00:07:14.955 "superblock": true, 00:07:14.955 "num_base_bdevs": 2, 00:07:14.955 "num_base_bdevs_discovered": 2, 00:07:14.955 "num_base_bdevs_operational": 2, 00:07:14.955 "base_bdevs_list": [ 00:07:14.955 { 00:07:14.955 "name": "pt1", 00:07:14.955 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:14.955 "is_configured": true, 00:07:14.955 "data_offset": 2048, 00:07:14.955 "data_size": 63488 00:07:14.955 }, 00:07:14.955 { 00:07:14.955 "name": "pt2", 00:07:14.955 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:14.955 "is_configured": true, 00:07:14.955 "data_offset": 2048, 00:07:14.955 "data_size": 63488 00:07:14.955 } 00:07:14.955 ] 00:07:14.955 } 00:07:14.955 } 00:07:14.955 }' 00:07:14.955 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:14.955 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:14.955 pt2' 00:07:14.955 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:14.955 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:14.955 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:14.955 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:14.955 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
pt1 00:07:14.955 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.955 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.955 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.955 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:14.955 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:14.955 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:14.955 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:14.955 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:14.955 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.955 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.215 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.215 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:15.215 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:15.215 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:15.215 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.215 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.215 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:15.215 [2024-11-26 12:50:32.657717] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:15.215 12:50:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.215 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ee978bf9-4f72-4fdc-9912-2761fb60890b 00:07:15.215 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ee978bf9-4f72-4fdc-9912-2761fb60890b ']' 00:07:15.215 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:15.215 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.215 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.215 [2024-11-26 12:50:32.709396] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:15.215 [2024-11-26 12:50:32.709464] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:15.215 [2024-11-26 12:50:32.709569] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:15.216 [2024-11-26 12:50:32.709645] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:15.216 [2024-11-26 12:50:32.709700] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.216 12:50:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.216 [2024-11-26 12:50:32.849251] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:15.216 [2024-11-26 12:50:32.851394] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:15.216 [2024-11-26 12:50:32.851508] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:15.216 [2024-11-26 12:50:32.851556] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:15.216 [2024-11-26 12:50:32.851571] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:15.216 [2024-11-26 12:50:32.851579] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:07:15.216 request: 00:07:15.216 { 00:07:15.216 "name": "raid_bdev1", 00:07:15.216 "raid_level": "raid0", 00:07:15.216 "base_bdevs": [ 00:07:15.216 "malloc1", 00:07:15.216 "malloc2" 00:07:15.216 ], 00:07:15.216 "strip_size_kb": 64, 00:07:15.216 "superblock": false, 00:07:15.216 "method": "bdev_raid_create", 00:07:15.216 "req_id": 1 00:07:15.216 } 00:07:15.216 Got JSON-RPC error response 00:07:15.216 response: 00:07:15.216 { 00:07:15.216 "code": -17, 00:07:15.216 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:15.216 } 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.216 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.476 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:15.476 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:15.476 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:07:15.476 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.476 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.476 [2024-11-26 12:50:32.913087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:15.476 [2024-11-26 12:50:32.913171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:15.476 [2024-11-26 12:50:32.913232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:15.476 [2024-11-26 12:50:32.913259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:15.476 [2024-11-26 12:50:32.915628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:15.476 [2024-11-26 12:50:32.915696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:15.476 [2024-11-26 12:50:32.915779] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:15.476 [2024-11-26 12:50:32.915841] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:15.476 pt1 00:07:15.476 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.476 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:15.476 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:15.476 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:15.476 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:15.476 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.476 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:15.476 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.476 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.476 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.476 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.476 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.476 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:15.476 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.476 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.476 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.476 12:50:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.476 "name": "raid_bdev1", 00:07:15.476 "uuid": "ee978bf9-4f72-4fdc-9912-2761fb60890b", 00:07:15.476 "strip_size_kb": 64, 00:07:15.476 "state": "configuring", 00:07:15.476 "raid_level": "raid0", 00:07:15.476 "superblock": true, 00:07:15.476 "num_base_bdevs": 2, 00:07:15.476 "num_base_bdevs_discovered": 1, 00:07:15.476 "num_base_bdevs_operational": 2, 00:07:15.476 "base_bdevs_list": [ 00:07:15.476 { 00:07:15.476 "name": "pt1", 00:07:15.476 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:15.476 "is_configured": true, 00:07:15.476 "data_offset": 2048, 00:07:15.476 "data_size": 63488 00:07:15.476 }, 00:07:15.476 { 00:07:15.476 "name": null, 00:07:15.476 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:15.476 "is_configured": false, 00:07:15.476 "data_offset": 2048, 00:07:15.476 "data_size": 63488 00:07:15.476 } 00:07:15.476 ] 00:07:15.476 }' 00:07:15.476 12:50:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.476 12:50:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.736 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:15.736 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:15.736 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:15.736 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:15.736 12:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.736 12:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.736 [2024-11-26 12:50:33.288464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:15.736 [2024-11-26 12:50:33.288525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:15.736 [2024-11-26 12:50:33.288550] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:15.737 [2024-11-26 12:50:33.288558] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:15.737 [2024-11-26 12:50:33.288978] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:15.737 [2024-11-26 12:50:33.288994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:15.737 [2024-11-26 12:50:33.289062] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:15.737 [2024-11-26 12:50:33.289085] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:15.737 [2024-11-26 12:50:33.289171] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:15.737 [2024-11-26 12:50:33.289199] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:15.737 [2024-11-26 12:50:33.289441] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:15.737 [2024-11-26 12:50:33.289553] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:15.737 [2024-11-26 12:50:33.289569] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:15.737 [2024-11-26 12:50:33.289664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:15.737 pt2 00:07:15.737 12:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.737 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:15.737 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:15.737 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:15.737 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:15.737 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:15.737 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:15.737 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.737 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.737 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.737 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.737 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.737 12:50:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.737 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.737 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:15.737 12:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.737 12:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.737 12:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.737 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.737 "name": "raid_bdev1", 00:07:15.737 "uuid": "ee978bf9-4f72-4fdc-9912-2761fb60890b", 00:07:15.737 "strip_size_kb": 64, 00:07:15.737 "state": "online", 00:07:15.737 "raid_level": "raid0", 00:07:15.737 "superblock": true, 00:07:15.737 "num_base_bdevs": 2, 00:07:15.737 "num_base_bdevs_discovered": 2, 00:07:15.737 "num_base_bdevs_operational": 2, 00:07:15.737 "base_bdevs_list": [ 00:07:15.737 { 00:07:15.737 "name": "pt1", 00:07:15.737 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:15.737 "is_configured": true, 00:07:15.737 "data_offset": 2048, 00:07:15.737 "data_size": 63488 00:07:15.737 }, 00:07:15.737 { 00:07:15.737 "name": "pt2", 00:07:15.737 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:15.737 "is_configured": true, 00:07:15.737 "data_offset": 2048, 00:07:15.737 "data_size": 63488 00:07:15.737 } 00:07:15.737 ] 00:07:15.737 }' 00:07:15.737 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.737 12:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.307 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:16.307 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:16.307 
12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:16.307 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:16.307 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:16.307 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:16.307 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:16.307 12:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.307 12:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.307 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:16.307 [2024-11-26 12:50:33.700150] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:16.307 12:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.307 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:16.307 "name": "raid_bdev1", 00:07:16.307 "aliases": [ 00:07:16.307 "ee978bf9-4f72-4fdc-9912-2761fb60890b" 00:07:16.307 ], 00:07:16.307 "product_name": "Raid Volume", 00:07:16.307 "block_size": 512, 00:07:16.308 "num_blocks": 126976, 00:07:16.308 "uuid": "ee978bf9-4f72-4fdc-9912-2761fb60890b", 00:07:16.308 "assigned_rate_limits": { 00:07:16.308 "rw_ios_per_sec": 0, 00:07:16.308 "rw_mbytes_per_sec": 0, 00:07:16.308 "r_mbytes_per_sec": 0, 00:07:16.308 "w_mbytes_per_sec": 0 00:07:16.308 }, 00:07:16.308 "claimed": false, 00:07:16.308 "zoned": false, 00:07:16.308 "supported_io_types": { 00:07:16.308 "read": true, 00:07:16.308 "write": true, 00:07:16.308 "unmap": true, 00:07:16.308 "flush": true, 00:07:16.308 "reset": true, 00:07:16.308 "nvme_admin": false, 00:07:16.308 "nvme_io": false, 00:07:16.308 "nvme_io_md": false, 00:07:16.308 
"write_zeroes": true, 00:07:16.308 "zcopy": false, 00:07:16.308 "get_zone_info": false, 00:07:16.308 "zone_management": false, 00:07:16.308 "zone_append": false, 00:07:16.308 "compare": false, 00:07:16.308 "compare_and_write": false, 00:07:16.308 "abort": false, 00:07:16.308 "seek_hole": false, 00:07:16.308 "seek_data": false, 00:07:16.308 "copy": false, 00:07:16.308 "nvme_iov_md": false 00:07:16.308 }, 00:07:16.308 "memory_domains": [ 00:07:16.308 { 00:07:16.308 "dma_device_id": "system", 00:07:16.308 "dma_device_type": 1 00:07:16.308 }, 00:07:16.308 { 00:07:16.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.308 "dma_device_type": 2 00:07:16.308 }, 00:07:16.308 { 00:07:16.308 "dma_device_id": "system", 00:07:16.308 "dma_device_type": 1 00:07:16.308 }, 00:07:16.308 { 00:07:16.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.308 "dma_device_type": 2 00:07:16.308 } 00:07:16.308 ], 00:07:16.308 "driver_specific": { 00:07:16.308 "raid": { 00:07:16.308 "uuid": "ee978bf9-4f72-4fdc-9912-2761fb60890b", 00:07:16.308 "strip_size_kb": 64, 00:07:16.308 "state": "online", 00:07:16.308 "raid_level": "raid0", 00:07:16.308 "superblock": true, 00:07:16.308 "num_base_bdevs": 2, 00:07:16.308 "num_base_bdevs_discovered": 2, 00:07:16.308 "num_base_bdevs_operational": 2, 00:07:16.308 "base_bdevs_list": [ 00:07:16.308 { 00:07:16.308 "name": "pt1", 00:07:16.308 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:16.308 "is_configured": true, 00:07:16.308 "data_offset": 2048, 00:07:16.308 "data_size": 63488 00:07:16.308 }, 00:07:16.308 { 00:07:16.308 "name": "pt2", 00:07:16.308 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:16.308 "is_configured": true, 00:07:16.308 "data_offset": 2048, 00:07:16.308 "data_size": 63488 00:07:16.308 } 00:07:16.308 ] 00:07:16.308 } 00:07:16.308 } 00:07:16.308 }' 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:16.308 pt2' 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.308 12:50:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.308 [2024-11-26 12:50:33.923656] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ee978bf9-4f72-4fdc-9912-2761fb60890b '!=' ee978bf9-4f72-4fdc-9912-2761fb60890b ']' 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72817 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72817 ']' 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72817 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:16.308 12:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72817 00:07:16.568 killing process with pid 72817 
00:07:16.569 12:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:16.569 12:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:16.569 12:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72817' 00:07:16.569 12:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72817 00:07:16.569 [2024-11-26 12:50:34.005879] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:16.569 [2024-11-26 12:50:34.005978] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:16.569 [2024-11-26 12:50:34.006037] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:16.569 [2024-11-26 12:50:34.006048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:16.569 12:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72817 00:07:16.569 [2024-11-26 12:50:34.049580] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:16.829 12:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:16.829 00:07:16.829 real 0m3.349s 00:07:16.829 user 0m4.931s 00:07:16.829 sys 0m0.747s 00:07:16.829 ************************************ 00:07:16.829 END TEST raid_superblock_test 00:07:16.829 ************************************ 00:07:16.829 12:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.829 12:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.829 12:50:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:16.829 12:50:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:16.829 12:50:34 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.829 12:50:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:16.829 ************************************ 00:07:16.829 START TEST raid_read_error_test 00:07:16.829 ************************************ 00:07:16.829 12:50:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:07:16.829 12:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:16.829 12:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:16.829 12:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:16.829 12:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:16.829 12:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:16.829 12:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:17.089 12:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:17.089 12:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:17.089 12:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:17.089 12:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:17.089 12:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:17.089 12:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:17.089 12:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:17.089 12:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:17.089 12:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:17.089 12:50:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:17.089 12:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:17.089 12:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:17.089 12:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:17.089 12:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:17.089 12:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:17.089 12:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:17.089 12:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3lRIHe0odT 00:07:17.089 12:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73018 00:07:17.089 12:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:17.089 12:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73018 00:07:17.089 12:50:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 73018 ']' 00:07:17.089 12:50:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.089 12:50:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.089 12:50:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:17.089 12:50:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.089 12:50:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.089 [2024-11-26 12:50:34.603383] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:17.089 [2024-11-26 12:50:34.603562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73018 ] 00:07:17.089 [2024-11-26 12:50:34.748709] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.349 [2024-11-26 12:50:34.817549] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.349 [2024-11-26 12:50:34.893211] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.349 [2024-11-26 12:50:34.893253] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.920 12:50:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.920 12:50:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:17.920 12:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:17.920 12:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:17.920 12:50:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.920 12:50:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.920 BaseBdev1_malloc 00:07:17.920 12:50:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.920 12:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:17.920 12:50:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.920 12:50:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.920 true 00:07:17.920 12:50:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.920 12:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:17.920 12:50:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.920 12:50:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.920 [2024-11-26 12:50:35.467326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:17.920 [2024-11-26 12:50:35.467472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:17.920 [2024-11-26 12:50:35.467498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:17.920 [2024-11-26 12:50:35.467520] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:17.920 [2024-11-26 12:50:35.469884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:17.920 [2024-11-26 12:50:35.469921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:17.920 BaseBdev1 00:07:17.920 12:50:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.920 12:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:17.920 12:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:17.920 12:50:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.920 12:50:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:17.920 BaseBdev2_malloc 00:07:17.920 12:50:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.920 12:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:17.920 12:50:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.920 12:50:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.920 true 00:07:17.920 12:50:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.920 12:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:17.921 12:50:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.921 12:50:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.921 [2024-11-26 12:50:35.530366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:17.921 [2024-11-26 12:50:35.530439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:17.921 [2024-11-26 12:50:35.530468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:17.921 [2024-11-26 12:50:35.530482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:17.921 [2024-11-26 12:50:35.534032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:17.921 [2024-11-26 12:50:35.534145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:17.921 BaseBdev2 00:07:17.921 12:50:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.921 12:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:17.921 12:50:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.921 12:50:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.921 [2024-11-26 12:50:35.542383] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:17.921 [2024-11-26 12:50:35.544729] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:17.921 [2024-11-26 12:50:35.544913] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:17.921 [2024-11-26 12:50:35.544948] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:17.921 [2024-11-26 12:50:35.545228] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:17.921 [2024-11-26 12:50:35.545368] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:17.921 [2024-11-26 12:50:35.545382] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:17.921 [2024-11-26 12:50:35.545515] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:17.921 12:50:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.921 12:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:17.921 12:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:17.921 12:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:17.921 12:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:17.921 12:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.921 12:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:17.921 12:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.921 12:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.921 12:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.921 12:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.921 12:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.921 12:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:17.921 12:50:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.921 12:50:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.921 12:50:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.921 12:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.921 "name": "raid_bdev1", 00:07:17.921 "uuid": "2046fe8d-eb83-4700-898d-42bce3a2853f", 00:07:17.921 "strip_size_kb": 64, 00:07:17.921 "state": "online", 00:07:17.921 "raid_level": "raid0", 00:07:17.921 "superblock": true, 00:07:17.921 "num_base_bdevs": 2, 00:07:17.921 "num_base_bdevs_discovered": 2, 00:07:17.921 "num_base_bdevs_operational": 2, 00:07:17.921 "base_bdevs_list": [ 00:07:17.921 { 00:07:17.921 "name": "BaseBdev1", 00:07:17.921 "uuid": "75710aec-2da9-574b-bab6-d86482cf46c3", 00:07:17.921 "is_configured": true, 00:07:17.921 "data_offset": 2048, 00:07:17.921 "data_size": 63488 00:07:17.921 }, 00:07:17.921 { 00:07:17.921 "name": "BaseBdev2", 00:07:17.921 "uuid": "240adb88-3a3e-59f3-83ba-d3a2caff1d61", 00:07:17.921 "is_configured": true, 00:07:17.921 "data_offset": 2048, 00:07:17.921 "data_size": 63488 00:07:17.921 } 00:07:17.921 ] 00:07:17.921 }' 00:07:17.921 12:50:35 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.181 12:50:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.441 12:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:18.441 12:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:18.441 [2024-11-26 12:50:36.081865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:19.384 12:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:19.384 12:50:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.384 12:50:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.384 12:50:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.384 12:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:19.384 12:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:19.384 12:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:19.384 12:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:19.384 12:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:19.384 12:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:19.384 12:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:19.384 12:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.384 12:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:19.384 12:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.384 12:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.384 12:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.384 12:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.384 12:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.384 12:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:19.384 12:50:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.384 12:50:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.384 12:50:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.643 12:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.643 "name": "raid_bdev1", 00:07:19.643 "uuid": "2046fe8d-eb83-4700-898d-42bce3a2853f", 00:07:19.643 "strip_size_kb": 64, 00:07:19.643 "state": "online", 00:07:19.643 "raid_level": "raid0", 00:07:19.643 "superblock": true, 00:07:19.643 "num_base_bdevs": 2, 00:07:19.643 "num_base_bdevs_discovered": 2, 00:07:19.643 "num_base_bdevs_operational": 2, 00:07:19.643 "base_bdevs_list": [ 00:07:19.643 { 00:07:19.643 "name": "BaseBdev1", 00:07:19.643 "uuid": "75710aec-2da9-574b-bab6-d86482cf46c3", 00:07:19.643 "is_configured": true, 00:07:19.643 "data_offset": 2048, 00:07:19.643 "data_size": 63488 00:07:19.643 }, 00:07:19.643 { 00:07:19.643 "name": "BaseBdev2", 00:07:19.643 "uuid": "240adb88-3a3e-59f3-83ba-d3a2caff1d61", 00:07:19.643 "is_configured": true, 00:07:19.643 "data_offset": 2048, 00:07:19.643 "data_size": 63488 00:07:19.643 } 00:07:19.643 ] 00:07:19.643 }' 00:07:19.643 12:50:37 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.643 12:50:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.901 12:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:19.901 12:50:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.901 12:50:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.901 [2024-11-26 12:50:37.417747] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:19.901 [2024-11-26 12:50:37.417851] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:19.901 [2024-11-26 12:50:37.420402] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:19.901 [2024-11-26 12:50:37.420489] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:19.901 [2024-11-26 12:50:37.420546] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:19.901 [2024-11-26 12:50:37.420598] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:19.901 { 00:07:19.901 "results": [ 00:07:19.901 { 00:07:19.901 "job": "raid_bdev1", 00:07:19.902 "core_mask": "0x1", 00:07:19.902 "workload": "randrw", 00:07:19.902 "percentage": 50, 00:07:19.902 "status": "finished", 00:07:19.902 "queue_depth": 1, 00:07:19.902 "io_size": 131072, 00:07:19.902 "runtime": 1.336487, 00:07:19.902 "iops": 15772.69363637656, 00:07:19.902 "mibps": 1971.58670454707, 00:07:19.902 "io_failed": 1, 00:07:19.902 "io_timeout": 0, 00:07:19.902 "avg_latency_us": 88.72722783342024, 00:07:19.902 "min_latency_us": 24.370305676855896, 00:07:19.902 "max_latency_us": 1387.989519650655 00:07:19.902 } 00:07:19.902 ], 00:07:19.902 "core_count": 1 00:07:19.902 } 00:07:19.902 12:50:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.902 12:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73018 00:07:19.902 12:50:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 73018 ']' 00:07:19.902 12:50:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 73018 00:07:19.902 12:50:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:19.902 12:50:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.902 12:50:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73018 00:07:19.902 killing process with pid 73018 00:07:19.902 12:50:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.902 12:50:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.902 12:50:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73018' 00:07:19.902 12:50:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 73018 00:07:19.902 [2024-11-26 12:50:37.463473] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:19.902 12:50:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 73018 00:07:19.902 [2024-11-26 12:50:37.490925] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:20.469 12:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3lRIHe0odT 00:07:20.469 12:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:20.469 12:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:20.469 12:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:07:20.469 12:50:37 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:20.469 12:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:20.469 12:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:20.469 ************************************ 00:07:20.469 END TEST raid_read_error_test 00:07:20.469 ************************************ 00:07:20.469 12:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:07:20.469 00:07:20.469 real 0m3.372s 00:07:20.469 user 0m4.118s 00:07:20.469 sys 0m0.597s 00:07:20.469 12:50:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.469 12:50:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.469 12:50:37 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:20.469 12:50:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:20.469 12:50:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.469 12:50:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:20.469 ************************************ 00:07:20.469 START TEST raid_write_error_test 00:07:20.469 ************************************ 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:20.469 12:50:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rbnTQMvtRE 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73152 00:07:20.469 12:50:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73152 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73152 ']' 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.469 12:50:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.469 [2024-11-26 12:50:38.048183] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:20.469 [2024-11-26 12:50:38.048411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73152 ] 00:07:20.727 [2024-11-26 12:50:38.208035] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.727 [2024-11-26 12:50:38.253833] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.727 [2024-11-26 12:50:38.296997] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.727 [2024-11-26 12:50:38.297116] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.295 12:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.295 12:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:21.295 12:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:21.295 12:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:21.295 12:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.295 12:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.295 BaseBdev1_malloc 00:07:21.295 12:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.295 12:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:21.295 12:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.295 12:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.295 true 00:07:21.295 12:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:21.295 12:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:21.295 12:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.295 12:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.295 [2024-11-26 12:50:38.903539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:21.295 [2024-11-26 12:50:38.903656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:21.295 [2024-11-26 12:50:38.903681] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:21.295 [2024-11-26 12:50:38.903689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:21.295 [2024-11-26 12:50:38.905778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:21.295 [2024-11-26 12:50:38.905824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:21.295 BaseBdev1 00:07:21.295 12:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.295 12:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:21.295 12:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:21.295 12:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.295 12:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.295 BaseBdev2_malloc 00:07:21.295 12:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.295 12:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:21.295 12:50:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.295 12:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.295 true 00:07:21.295 12:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.295 12:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:21.295 12:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.295 12:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.295 [2024-11-26 12:50:38.960760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:21.295 [2024-11-26 12:50:38.960821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:21.295 [2024-11-26 12:50:38.960845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:21.295 [2024-11-26 12:50:38.960857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:21.295 [2024-11-26 12:50:38.963633] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:21.295 [2024-11-26 12:50:38.963680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:21.295 BaseBdev2 00:07:21.295 12:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.295 12:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:21.295 12:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.295 12:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.554 [2024-11-26 12:50:38.972748] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:21.554 [2024-11-26 12:50:38.974596] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:21.554 [2024-11-26 12:50:38.974799] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:21.554 [2024-11-26 12:50:38.974848] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:21.554 [2024-11-26 12:50:38.975113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:21.554 [2024-11-26 12:50:38.975321] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:21.554 [2024-11-26 12:50:38.975372] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:21.554 [2024-11-26 12:50:38.975530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:21.554 12:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.554 12:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:21.554 12:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:21.554 12:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:21.554 12:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:21.554 12:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.554 12:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.554 12:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.554 12:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.554 12:50:38 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.554 12:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.554 12:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.554 12:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:21.554 12:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.554 12:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.554 12:50:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.554 12:50:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.554 "name": "raid_bdev1", 00:07:21.554 "uuid": "2abe820a-db9e-4d9f-a388-dbfe3638b8ba", 00:07:21.554 "strip_size_kb": 64, 00:07:21.554 "state": "online", 00:07:21.554 "raid_level": "raid0", 00:07:21.554 "superblock": true, 00:07:21.554 "num_base_bdevs": 2, 00:07:21.554 "num_base_bdevs_discovered": 2, 00:07:21.554 "num_base_bdevs_operational": 2, 00:07:21.554 "base_bdevs_list": [ 00:07:21.554 { 00:07:21.554 "name": "BaseBdev1", 00:07:21.554 "uuid": "bdc05c1f-ae01-5e3a-9e63-534d4b02b5f5", 00:07:21.554 "is_configured": true, 00:07:21.554 "data_offset": 2048, 00:07:21.555 "data_size": 63488 00:07:21.555 }, 00:07:21.555 { 00:07:21.555 "name": "BaseBdev2", 00:07:21.555 "uuid": "d5689307-f826-513c-876c-54609c37deb3", 00:07:21.555 "is_configured": true, 00:07:21.555 "data_offset": 2048, 00:07:21.555 "data_size": 63488 00:07:21.555 } 00:07:21.555 ] 00:07:21.555 }' 00:07:21.555 12:50:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.555 12:50:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.813 12:50:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:21.813 12:50:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:21.813 [2024-11-26 12:50:39.460292] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:22.751 12:50:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:22.751 12:50:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.751 12:50:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.751 12:50:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.751 12:50:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:22.751 12:50:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:22.751 12:50:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:22.751 12:50:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:22.751 12:50:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:22.751 12:50:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:22.751 12:50:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:22.751 12:50:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.751 12:50:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:22.751 12:50:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.751 12:50:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.751 12:50:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.751 12:50:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.751 12:50:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.751 12:50:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:22.751 12:50:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.751 12:50:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.751 12:50:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.012 12:50:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.012 "name": "raid_bdev1", 00:07:23.012 "uuid": "2abe820a-db9e-4d9f-a388-dbfe3638b8ba", 00:07:23.012 "strip_size_kb": 64, 00:07:23.012 "state": "online", 00:07:23.012 "raid_level": "raid0", 00:07:23.012 "superblock": true, 00:07:23.012 "num_base_bdevs": 2, 00:07:23.012 "num_base_bdevs_discovered": 2, 00:07:23.012 "num_base_bdevs_operational": 2, 00:07:23.012 "base_bdevs_list": [ 00:07:23.012 { 00:07:23.012 "name": "BaseBdev1", 00:07:23.012 "uuid": "bdc05c1f-ae01-5e3a-9e63-534d4b02b5f5", 00:07:23.012 "is_configured": true, 00:07:23.012 "data_offset": 2048, 00:07:23.012 "data_size": 63488 00:07:23.012 }, 00:07:23.012 { 00:07:23.012 "name": "BaseBdev2", 00:07:23.012 "uuid": "d5689307-f826-513c-876c-54609c37deb3", 00:07:23.012 "is_configured": true, 00:07:23.012 "data_offset": 2048, 00:07:23.012 "data_size": 63488 00:07:23.012 } 00:07:23.012 ] 00:07:23.012 }' 00:07:23.012 12:50:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.012 12:50:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.273 12:50:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:23.273 12:50:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.273 12:50:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.273 [2024-11-26 12:50:40.867839] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:23.273 [2024-11-26 12:50:40.867945] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:23.273 [2024-11-26 12:50:40.870345] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:23.273 [2024-11-26 12:50:40.870444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:23.273 [2024-11-26 12:50:40.870498] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:23.273 [2024-11-26 12:50:40.870538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:23.273 12:50:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.273 { 00:07:23.273 "results": [ 00:07:23.273 { 00:07:23.273 "job": "raid_bdev1", 00:07:23.273 "core_mask": "0x1", 00:07:23.273 "workload": "randrw", 00:07:23.273 "percentage": 50, 00:07:23.273 "status": "finished", 00:07:23.273 "queue_depth": 1, 00:07:23.273 "io_size": 131072, 00:07:23.273 "runtime": 1.408677, 00:07:23.273 "iops": 17985.67024236216, 00:07:23.273 "mibps": 2248.20878029527, 00:07:23.273 "io_failed": 1, 00:07:23.273 "io_timeout": 0, 00:07:23.273 "avg_latency_us": 76.5907696995591, 00:07:23.273 "min_latency_us": 24.482096069868994, 00:07:23.273 "max_latency_us": 1359.3711790393013 00:07:23.273 } 00:07:23.273 ], 00:07:23.273 "core_count": 1 00:07:23.273 } 00:07:23.273 12:50:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73152 00:07:23.273 12:50:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@950 -- # '[' -z 73152 ']' 00:07:23.273 12:50:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73152 00:07:23.273 12:50:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:23.273 12:50:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:23.273 12:50:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73152 00:07:23.273 12:50:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:23.273 killing process with pid 73152 00:07:23.273 12:50:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:23.273 12:50:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73152' 00:07:23.273 12:50:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73152 00:07:23.273 [2024-11-26 12:50:40.916908] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:23.273 12:50:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73152 00:07:23.273 [2024-11-26 12:50:40.932416] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:23.533 12:50:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rbnTQMvtRE 00:07:23.533 12:50:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:23.533 12:50:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:23.533 12:50:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:07:23.533 12:50:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:23.534 12:50:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:23.534 12:50:41 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:23.534 ************************************ 00:07:23.534 END TEST raid_write_error_test 00:07:23.534 ************************************ 00:07:23.534 12:50:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:07:23.534 00:07:23.534 real 0m3.230s 00:07:23.534 user 0m4.094s 00:07:23.534 sys 0m0.512s 00:07:23.534 12:50:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.534 12:50:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.794 12:50:41 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:23.794 12:50:41 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:23.794 12:50:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:23.794 12:50:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.794 12:50:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:23.794 ************************************ 00:07:23.794 START TEST raid_state_function_test 00:07:23.794 ************************************ 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:23.794 Process raid pid: 73279 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73279 
00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73279' 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73279 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73279 ']' 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.794 12:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.794 [2024-11-26 12:50:41.338123] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:23.795 [2024-11-26 12:50:41.338293] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.054 [2024-11-26 12:50:41.496101] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.054 [2024-11-26 12:50:41.541210] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.054 [2024-11-26 12:50:41.582509] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.054 [2024-11-26 12:50:41.582623] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.625 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.625 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:24.625 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:24.625 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.625 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.625 [2024-11-26 12:50:42.187415] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:24.625 [2024-11-26 12:50:42.187522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:24.625 [2024-11-26 12:50:42.187555] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:24.625 [2024-11-26 12:50:42.187578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:24.625 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.625 12:50:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:24.625 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:24.625 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:24.625 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:24.625 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:24.625 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:24.625 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.625 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.625 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.625 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.625 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.625 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:24.625 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.625 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.625 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.625 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.625 "name": "Existed_Raid", 00:07:24.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.625 "strip_size_kb": 64, 00:07:24.625 "state": "configuring", 00:07:24.625 
"raid_level": "concat", 00:07:24.625 "superblock": false, 00:07:24.625 "num_base_bdevs": 2, 00:07:24.625 "num_base_bdevs_discovered": 0, 00:07:24.625 "num_base_bdevs_operational": 2, 00:07:24.625 "base_bdevs_list": [ 00:07:24.625 { 00:07:24.625 "name": "BaseBdev1", 00:07:24.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.625 "is_configured": false, 00:07:24.625 "data_offset": 0, 00:07:24.625 "data_size": 0 00:07:24.625 }, 00:07:24.625 { 00:07:24.625 "name": "BaseBdev2", 00:07:24.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.625 "is_configured": false, 00:07:24.625 "data_offset": 0, 00:07:24.625 "data_size": 0 00:07:24.625 } 00:07:24.625 ] 00:07:24.625 }' 00:07:24.625 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.625 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.195 [2024-11-26 12:50:42.618581] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:25.195 [2024-11-26 12:50:42.618624] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:25.195 [2024-11-26 12:50:42.630601] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:25.195 [2024-11-26 12:50:42.630683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:25.195 [2024-11-26 12:50:42.630695] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:25.195 [2024-11-26 12:50:42.630704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.195 [2024-11-26 12:50:42.651323] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:25.195 BaseBdev1 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.195 [ 00:07:25.195 { 00:07:25.195 "name": "BaseBdev1", 00:07:25.195 "aliases": [ 00:07:25.195 "427c324d-8ef6-46f8-9f30-82b1e1525663" 00:07:25.195 ], 00:07:25.195 "product_name": "Malloc disk", 00:07:25.195 "block_size": 512, 00:07:25.195 "num_blocks": 65536, 00:07:25.195 "uuid": "427c324d-8ef6-46f8-9f30-82b1e1525663", 00:07:25.195 "assigned_rate_limits": { 00:07:25.195 "rw_ios_per_sec": 0, 00:07:25.195 "rw_mbytes_per_sec": 0, 00:07:25.195 "r_mbytes_per_sec": 0, 00:07:25.195 "w_mbytes_per_sec": 0 00:07:25.195 }, 00:07:25.195 "claimed": true, 00:07:25.195 "claim_type": "exclusive_write", 00:07:25.195 "zoned": false, 00:07:25.195 "supported_io_types": { 00:07:25.195 "read": true, 00:07:25.195 "write": true, 00:07:25.195 "unmap": true, 00:07:25.195 "flush": true, 00:07:25.195 "reset": true, 00:07:25.195 "nvme_admin": false, 00:07:25.195 "nvme_io": false, 00:07:25.195 "nvme_io_md": false, 00:07:25.195 "write_zeroes": true, 00:07:25.195 "zcopy": true, 00:07:25.195 "get_zone_info": false, 00:07:25.195 "zone_management": false, 00:07:25.195 "zone_append": false, 00:07:25.195 "compare": false, 00:07:25.195 "compare_and_write": false, 00:07:25.195 "abort": true, 00:07:25.195 "seek_hole": false, 00:07:25.195 "seek_data": false, 00:07:25.195 "copy": true, 00:07:25.195 "nvme_iov_md": 
false 00:07:25.195 }, 00:07:25.195 "memory_domains": [ 00:07:25.195 { 00:07:25.195 "dma_device_id": "system", 00:07:25.195 "dma_device_type": 1 00:07:25.195 }, 00:07:25.195 { 00:07:25.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.195 "dma_device_type": 2 00:07:25.195 } 00:07:25.195 ], 00:07:25.195 "driver_specific": {} 00:07:25.195 } 00:07:25.195 ] 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.195 
12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.195 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.195 "name": "Existed_Raid", 00:07:25.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.195 "strip_size_kb": 64, 00:07:25.196 "state": "configuring", 00:07:25.196 "raid_level": "concat", 00:07:25.196 "superblock": false, 00:07:25.196 "num_base_bdevs": 2, 00:07:25.196 "num_base_bdevs_discovered": 1, 00:07:25.196 "num_base_bdevs_operational": 2, 00:07:25.196 "base_bdevs_list": [ 00:07:25.196 { 00:07:25.196 "name": "BaseBdev1", 00:07:25.196 "uuid": "427c324d-8ef6-46f8-9f30-82b1e1525663", 00:07:25.196 "is_configured": true, 00:07:25.196 "data_offset": 0, 00:07:25.196 "data_size": 65536 00:07:25.196 }, 00:07:25.196 { 00:07:25.196 "name": "BaseBdev2", 00:07:25.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.196 "is_configured": false, 00:07:25.196 "data_offset": 0, 00:07:25.196 "data_size": 0 00:07:25.196 } 00:07:25.196 ] 00:07:25.196 }' 00:07:25.196 12:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.196 12:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.765 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:25.765 12:50:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.765 12:50:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.765 [2024-11-26 12:50:43.162794] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:25.765 [2024-11-26 12:50:43.162843] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:25.765 12:50:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.765 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:25.765 12:50:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.765 12:50:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.765 [2024-11-26 12:50:43.174811] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:25.765 [2024-11-26 12:50:43.176689] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:25.765 [2024-11-26 12:50:43.176728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:25.765 12:50:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.765 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:25.765 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:25.765 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:25.765 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.765 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.765 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:25.765 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.765 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:25.765 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.765 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.765 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.765 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.765 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.765 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.765 12:50:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.765 12:50:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.765 12:50:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.766 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.766 "name": "Existed_Raid", 00:07:25.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.766 "strip_size_kb": 64, 00:07:25.766 "state": "configuring", 00:07:25.766 "raid_level": "concat", 00:07:25.766 "superblock": false, 00:07:25.766 "num_base_bdevs": 2, 00:07:25.766 "num_base_bdevs_discovered": 1, 00:07:25.766 "num_base_bdevs_operational": 2, 00:07:25.766 "base_bdevs_list": [ 00:07:25.766 { 00:07:25.766 "name": "BaseBdev1", 00:07:25.766 "uuid": "427c324d-8ef6-46f8-9f30-82b1e1525663", 00:07:25.766 "is_configured": true, 00:07:25.766 "data_offset": 0, 00:07:25.766 "data_size": 65536 00:07:25.766 }, 00:07:25.766 { 00:07:25.766 "name": "BaseBdev2", 00:07:25.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.766 "is_configured": false, 00:07:25.766 "data_offset": 0, 00:07:25.766 "data_size": 0 00:07:25.766 } 
00:07:25.766 ] 00:07:25.766 }' 00:07:25.766 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.766 12:50:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.026 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:26.026 12:50:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.026 12:50:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.026 [2024-11-26 12:50:43.630376] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:26.026 [2024-11-26 12:50:43.630574] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:26.026 [2024-11-26 12:50:43.630639] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:26.026 [2024-11-26 12:50:43.631365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:26.026 [2024-11-26 12:50:43.631793] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:26.026 [2024-11-26 12:50:43.631903] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:26.026 [2024-11-26 12:50:43.632433] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.026 BaseBdev2 00:07:26.026 12:50:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.026 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:26.026 12:50:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:26.026 12:50:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:26.026 12:50:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:26.026 12:50:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:26.026 12:50:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:26.026 12:50:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:26.026 12:50:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.026 12:50:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.026 12:50:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.026 12:50:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:26.026 12:50:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.026 12:50:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.026 [ 00:07:26.026 { 00:07:26.026 "name": "BaseBdev2", 00:07:26.026 "aliases": [ 00:07:26.026 "1fbf8377-306b-419b-b4d3-dd45420101bf" 00:07:26.026 ], 00:07:26.026 "product_name": "Malloc disk", 00:07:26.026 "block_size": 512, 00:07:26.026 "num_blocks": 65536, 00:07:26.026 "uuid": "1fbf8377-306b-419b-b4d3-dd45420101bf", 00:07:26.026 "assigned_rate_limits": { 00:07:26.026 "rw_ios_per_sec": 0, 00:07:26.026 "rw_mbytes_per_sec": 0, 00:07:26.026 "r_mbytes_per_sec": 0, 00:07:26.026 "w_mbytes_per_sec": 0 00:07:26.026 }, 00:07:26.026 "claimed": true, 00:07:26.026 "claim_type": "exclusive_write", 00:07:26.026 "zoned": false, 00:07:26.026 "supported_io_types": { 00:07:26.026 "read": true, 00:07:26.026 "write": true, 00:07:26.026 "unmap": true, 00:07:26.026 "flush": true, 00:07:26.026 "reset": true, 00:07:26.026 "nvme_admin": false, 00:07:26.026 "nvme_io": false, 00:07:26.026 "nvme_io_md": 
false, 00:07:26.026 "write_zeroes": true, 00:07:26.026 "zcopy": true, 00:07:26.026 "get_zone_info": false, 00:07:26.026 "zone_management": false, 00:07:26.026 "zone_append": false, 00:07:26.026 "compare": false, 00:07:26.026 "compare_and_write": false, 00:07:26.026 "abort": true, 00:07:26.026 "seek_hole": false, 00:07:26.026 "seek_data": false, 00:07:26.026 "copy": true, 00:07:26.026 "nvme_iov_md": false 00:07:26.026 }, 00:07:26.026 "memory_domains": [ 00:07:26.026 { 00:07:26.026 "dma_device_id": "system", 00:07:26.026 "dma_device_type": 1 00:07:26.026 }, 00:07:26.026 { 00:07:26.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.026 "dma_device_type": 2 00:07:26.026 } 00:07:26.026 ], 00:07:26.026 "driver_specific": {} 00:07:26.026 } 00:07:26.026 ] 00:07:26.026 12:50:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.026 12:50:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:26.026 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:26.026 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:26.026 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:26.026 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.026 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:26.026 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:26.026 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.026 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.026 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:26.026 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.027 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.027 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.027 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.027 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.027 12:50:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.027 12:50:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.027 12:50:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.286 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.286 "name": "Existed_Raid", 00:07:26.286 "uuid": "2e09dfa5-f9ba-4b54-99a2-e6a08d49472b", 00:07:26.286 "strip_size_kb": 64, 00:07:26.286 "state": "online", 00:07:26.286 "raid_level": "concat", 00:07:26.286 "superblock": false, 00:07:26.286 "num_base_bdevs": 2, 00:07:26.286 "num_base_bdevs_discovered": 2, 00:07:26.286 "num_base_bdevs_operational": 2, 00:07:26.286 "base_bdevs_list": [ 00:07:26.286 { 00:07:26.286 "name": "BaseBdev1", 00:07:26.286 "uuid": "427c324d-8ef6-46f8-9f30-82b1e1525663", 00:07:26.286 "is_configured": true, 00:07:26.286 "data_offset": 0, 00:07:26.286 "data_size": 65536 00:07:26.286 }, 00:07:26.286 { 00:07:26.286 "name": "BaseBdev2", 00:07:26.286 "uuid": "1fbf8377-306b-419b-b4d3-dd45420101bf", 00:07:26.286 "is_configured": true, 00:07:26.286 "data_offset": 0, 00:07:26.286 "data_size": 65536 00:07:26.286 } 00:07:26.286 ] 00:07:26.286 }' 00:07:26.286 12:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:26.286 12:50:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.546 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:26.546 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:26.546 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:26.547 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:26.547 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:26.547 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:26.547 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:26.547 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:26.547 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.547 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.547 [2024-11-26 12:50:44.097816] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:26.547 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.547 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:26.547 "name": "Existed_Raid", 00:07:26.547 "aliases": [ 00:07:26.547 "2e09dfa5-f9ba-4b54-99a2-e6a08d49472b" 00:07:26.547 ], 00:07:26.547 "product_name": "Raid Volume", 00:07:26.547 "block_size": 512, 00:07:26.547 "num_blocks": 131072, 00:07:26.547 "uuid": "2e09dfa5-f9ba-4b54-99a2-e6a08d49472b", 00:07:26.547 "assigned_rate_limits": { 00:07:26.547 "rw_ios_per_sec": 0, 00:07:26.547 "rw_mbytes_per_sec": 0, 00:07:26.547 "r_mbytes_per_sec": 
0, 00:07:26.547 "w_mbytes_per_sec": 0 00:07:26.547 }, 00:07:26.547 "claimed": false, 00:07:26.547 "zoned": false, 00:07:26.547 "supported_io_types": { 00:07:26.547 "read": true, 00:07:26.547 "write": true, 00:07:26.547 "unmap": true, 00:07:26.547 "flush": true, 00:07:26.547 "reset": true, 00:07:26.547 "nvme_admin": false, 00:07:26.547 "nvme_io": false, 00:07:26.547 "nvme_io_md": false, 00:07:26.547 "write_zeroes": true, 00:07:26.547 "zcopy": false, 00:07:26.547 "get_zone_info": false, 00:07:26.547 "zone_management": false, 00:07:26.547 "zone_append": false, 00:07:26.547 "compare": false, 00:07:26.547 "compare_and_write": false, 00:07:26.547 "abort": false, 00:07:26.547 "seek_hole": false, 00:07:26.547 "seek_data": false, 00:07:26.547 "copy": false, 00:07:26.547 "nvme_iov_md": false 00:07:26.547 }, 00:07:26.547 "memory_domains": [ 00:07:26.547 { 00:07:26.547 "dma_device_id": "system", 00:07:26.547 "dma_device_type": 1 00:07:26.547 }, 00:07:26.547 { 00:07:26.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.547 "dma_device_type": 2 00:07:26.547 }, 00:07:26.547 { 00:07:26.547 "dma_device_id": "system", 00:07:26.547 "dma_device_type": 1 00:07:26.547 }, 00:07:26.547 { 00:07:26.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.547 "dma_device_type": 2 00:07:26.547 } 00:07:26.547 ], 00:07:26.547 "driver_specific": { 00:07:26.547 "raid": { 00:07:26.547 "uuid": "2e09dfa5-f9ba-4b54-99a2-e6a08d49472b", 00:07:26.547 "strip_size_kb": 64, 00:07:26.547 "state": "online", 00:07:26.547 "raid_level": "concat", 00:07:26.547 "superblock": false, 00:07:26.547 "num_base_bdevs": 2, 00:07:26.547 "num_base_bdevs_discovered": 2, 00:07:26.547 "num_base_bdevs_operational": 2, 00:07:26.547 "base_bdevs_list": [ 00:07:26.547 { 00:07:26.547 "name": "BaseBdev1", 00:07:26.547 "uuid": "427c324d-8ef6-46f8-9f30-82b1e1525663", 00:07:26.547 "is_configured": true, 00:07:26.547 "data_offset": 0, 00:07:26.547 "data_size": 65536 00:07:26.547 }, 00:07:26.547 { 00:07:26.547 "name": "BaseBdev2", 
00:07:26.547 "uuid": "1fbf8377-306b-419b-b4d3-dd45420101bf", 00:07:26.547 "is_configured": true, 00:07:26.547 "data_offset": 0, 00:07:26.547 "data_size": 65536 00:07:26.547 } 00:07:26.547 ] 00:07:26.547 } 00:07:26.547 } 00:07:26.547 }' 00:07:26.547 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:26.547 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:26.547 BaseBdev2' 00:07:26.547 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.547 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:26.547 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:26.547 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:26.547 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.547 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.547 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.807 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.807 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:26.807 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:26.807 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:26.807 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:26.807 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.807 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.807 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.807 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.807 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:26.807 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:26.808 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:26.808 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.808 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.808 [2024-11-26 12:50:44.317239] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:26.808 [2024-11-26 12:50:44.317273] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:26.808 [2024-11-26 12:50:44.317321] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:26.808 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.808 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:26.808 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:26.808 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:26.808 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:26.808 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:26.808 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:26.808 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.808 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:26.808 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:26.808 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.808 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:26.808 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.808 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.808 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.808 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.808 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.808 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.808 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.808 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.808 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.808 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.808 "name": "Existed_Raid", 00:07:26.808 "uuid": "2e09dfa5-f9ba-4b54-99a2-e6a08d49472b", 00:07:26.808 "strip_size_kb": 64, 00:07:26.808 
"state": "offline", 00:07:26.808 "raid_level": "concat", 00:07:26.808 "superblock": false, 00:07:26.808 "num_base_bdevs": 2, 00:07:26.808 "num_base_bdevs_discovered": 1, 00:07:26.808 "num_base_bdevs_operational": 1, 00:07:26.808 "base_bdevs_list": [ 00:07:26.808 { 00:07:26.808 "name": null, 00:07:26.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.808 "is_configured": false, 00:07:26.808 "data_offset": 0, 00:07:26.808 "data_size": 65536 00:07:26.808 }, 00:07:26.808 { 00:07:26.808 "name": "BaseBdev2", 00:07:26.808 "uuid": "1fbf8377-306b-419b-b4d3-dd45420101bf", 00:07:26.808 "is_configured": true, 00:07:26.808 "data_offset": 0, 00:07:26.808 "data_size": 65536 00:07:26.808 } 00:07:26.808 ] 00:07:26.808 }' 00:07:26.808 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.808 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.378 [2024-11-26 12:50:44.807632] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:27.378 [2024-11-26 12:50:44.807736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73279 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73279 ']' 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 73279 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73279 00:07:27.378 killing process with pid 73279 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73279' 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73279 00:07:27.378 [2024-11-26 12:50:44.902764] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:27.378 12:50:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73279 00:07:27.378 [2024-11-26 12:50:44.903755] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:27.639 00:07:27.639 real 0m3.899s 00:07:27.639 user 0m6.132s 00:07:27.639 sys 0m0.775s 00:07:27.639 ************************************ 00:07:27.639 END TEST raid_state_function_test 00:07:27.639 ************************************ 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.639 12:50:45 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:27.639 12:50:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:07:27.639 12:50:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.639 12:50:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:27.639 ************************************ 00:07:27.639 START TEST raid_state_function_test_sb 00:07:27.639 ************************************ 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:27.639 Process raid pid: 73521 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73521 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73521' 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73521 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73521 ']' 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.639 12:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.639 [2024-11-26 12:50:45.311874] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:27.639 [2024-11-26 12:50:45.312078] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.900 [2024-11-26 12:50:45.472377] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.900 [2024-11-26 12:50:45.517871] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.900 [2024-11-26 12:50:45.560788] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.900 [2024-11-26 12:50:45.560827] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.873 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:28.873 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:28.873 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:28.873 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.873 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.873 [2024-11-26 12:50:46.158727] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:28.873 [2024-11-26 12:50:46.158776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:28.873 [2024-11-26 12:50:46.158788] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:28.873 [2024-11-26 12:50:46.158798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:28.873 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.873 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:28.873 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.873 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:28.873 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:28.873 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.873 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.873 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.873 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.873 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.873 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.873 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.873 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:28.873 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.873 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.873 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.873 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.873 "name": "Existed_Raid", 00:07:28.873 "uuid": "9d5e6a81-091e-44ba-bf6a-9132554b2a7c", 00:07:28.873 "strip_size_kb": 64, 00:07:28.873 "state": "configuring", 00:07:28.873 "raid_level": "concat", 00:07:28.873 "superblock": true, 00:07:28.873 "num_base_bdevs": 2, 00:07:28.873 "num_base_bdevs_discovered": 0, 00:07:28.873 "num_base_bdevs_operational": 2, 00:07:28.873 "base_bdevs_list": [ 00:07:28.873 { 00:07:28.873 "name": "BaseBdev1", 00:07:28.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.873 "is_configured": false, 00:07:28.873 "data_offset": 0, 00:07:28.873 "data_size": 0 00:07:28.873 }, 00:07:28.873 { 00:07:28.873 "name": "BaseBdev2", 00:07:28.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.873 "is_configured": false, 00:07:28.873 "data_offset": 0, 00:07:28.873 "data_size": 0 00:07:28.873 } 00:07:28.873 ] 00:07:28.873 }' 00:07:28.873 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.873 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.145 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:29.145 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.145 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.145 [2024-11-26 12:50:46.609887] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:29.145 [2024-11-26 12:50:46.609977] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:29.145 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.145 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:29.145 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.145 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.145 [2024-11-26 12:50:46.621885] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:29.145 [2024-11-26 12:50:46.621964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:29.145 [2024-11-26 12:50:46.621993] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:29.145 [2024-11-26 12:50:46.622015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:29.145 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.145 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:29.145 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.145 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.145 [2024-11-26 12:50:46.642784] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:29.145 BaseBdev1 00:07:29.145 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.145 12:50:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:29.145 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:29.145 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:29.145 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:29.145 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:29.145 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:29.145 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:29.145 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.145 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.145 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.145 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:29.145 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.145 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.145 [ 00:07:29.145 { 00:07:29.145 "name": "BaseBdev1", 00:07:29.145 "aliases": [ 00:07:29.145 "e13520fd-0b38-4a41-9b7f-ae44cbdd1264" 00:07:29.145 ], 00:07:29.145 "product_name": "Malloc disk", 00:07:29.145 "block_size": 512, 00:07:29.145 "num_blocks": 65536, 00:07:29.145 "uuid": "e13520fd-0b38-4a41-9b7f-ae44cbdd1264", 00:07:29.145 "assigned_rate_limits": { 00:07:29.145 "rw_ios_per_sec": 0, 00:07:29.145 "rw_mbytes_per_sec": 0, 00:07:29.145 "r_mbytes_per_sec": 0, 00:07:29.145 "w_mbytes_per_sec": 0 00:07:29.145 }, 00:07:29.145 "claimed": true, 
00:07:29.145 "claim_type": "exclusive_write", 00:07:29.145 "zoned": false, 00:07:29.145 "supported_io_types": { 00:07:29.145 "read": true, 00:07:29.145 "write": true, 00:07:29.145 "unmap": true, 00:07:29.145 "flush": true, 00:07:29.145 "reset": true, 00:07:29.145 "nvme_admin": false, 00:07:29.145 "nvme_io": false, 00:07:29.145 "nvme_io_md": false, 00:07:29.145 "write_zeroes": true, 00:07:29.145 "zcopy": true, 00:07:29.145 "get_zone_info": false, 00:07:29.145 "zone_management": false, 00:07:29.145 "zone_append": false, 00:07:29.146 "compare": false, 00:07:29.146 "compare_and_write": false, 00:07:29.146 "abort": true, 00:07:29.146 "seek_hole": false, 00:07:29.146 "seek_data": false, 00:07:29.146 "copy": true, 00:07:29.146 "nvme_iov_md": false 00:07:29.146 }, 00:07:29.146 "memory_domains": [ 00:07:29.146 { 00:07:29.146 "dma_device_id": "system", 00:07:29.146 "dma_device_type": 1 00:07:29.146 }, 00:07:29.146 { 00:07:29.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.146 "dma_device_type": 2 00:07:29.146 } 00:07:29.146 ], 00:07:29.146 "driver_specific": {} 00:07:29.146 } 00:07:29.146 ] 00:07:29.146 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.146 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:29.146 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:29.146 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.146 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:29.146 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:29.146 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.146 12:50:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.146 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.146 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.146 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.146 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.146 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.146 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.146 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.146 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.146 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.146 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.146 "name": "Existed_Raid", 00:07:29.146 "uuid": "c8a0412c-410b-4f08-acbe-043cf000d0ee", 00:07:29.146 "strip_size_kb": 64, 00:07:29.146 "state": "configuring", 00:07:29.146 "raid_level": "concat", 00:07:29.146 "superblock": true, 00:07:29.146 "num_base_bdevs": 2, 00:07:29.146 "num_base_bdevs_discovered": 1, 00:07:29.146 "num_base_bdevs_operational": 2, 00:07:29.146 "base_bdevs_list": [ 00:07:29.146 { 00:07:29.146 "name": "BaseBdev1", 00:07:29.146 "uuid": "e13520fd-0b38-4a41-9b7f-ae44cbdd1264", 00:07:29.146 "is_configured": true, 00:07:29.146 "data_offset": 2048, 00:07:29.146 "data_size": 63488 00:07:29.146 }, 00:07:29.146 { 00:07:29.146 "name": "BaseBdev2", 00:07:29.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.146 
"is_configured": false, 00:07:29.146 "data_offset": 0, 00:07:29.146 "data_size": 0 00:07:29.146 } 00:07:29.146 ] 00:07:29.146 }' 00:07:29.146 12:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.146 12:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.717 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:29.717 12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.717 12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.717 [2024-11-26 12:50:47.090035] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:29.717 [2024-11-26 12:50:47.090120] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:29.717 12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.717 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:29.717 12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.717 12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.717 [2024-11-26 12:50:47.098066] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:29.717 [2024-11-26 12:50:47.099921] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:29.717 [2024-11-26 12:50:47.099961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:29.717 12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.717 12:50:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:29.717 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:29.717 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:29.717 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.717 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:29.717 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:29.717 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.717 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.717 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.717 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.717 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.717 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.717 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.717 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.717 12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.717 12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.717 12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.717 12:50:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.717 "name": "Existed_Raid", 00:07:29.717 "uuid": "e80459c2-8c41-42a9-86f5-7fa6c9415239", 00:07:29.717 "strip_size_kb": 64, 00:07:29.717 "state": "configuring", 00:07:29.717 "raid_level": "concat", 00:07:29.717 "superblock": true, 00:07:29.717 "num_base_bdevs": 2, 00:07:29.717 "num_base_bdevs_discovered": 1, 00:07:29.717 "num_base_bdevs_operational": 2, 00:07:29.717 "base_bdevs_list": [ 00:07:29.717 { 00:07:29.717 "name": "BaseBdev1", 00:07:29.717 "uuid": "e13520fd-0b38-4a41-9b7f-ae44cbdd1264", 00:07:29.717 "is_configured": true, 00:07:29.717 "data_offset": 2048, 00:07:29.717 "data_size": 63488 00:07:29.717 }, 00:07:29.717 { 00:07:29.717 "name": "BaseBdev2", 00:07:29.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.717 "is_configured": false, 00:07:29.717 "data_offset": 0, 00:07:29.717 "data_size": 0 00:07:29.717 } 00:07:29.717 ] 00:07:29.717 }' 00:07:29.717 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.717 12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.978 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:29.978 12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.978 12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.978 [2024-11-26 12:50:47.556054] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:29.978 [2024-11-26 12:50:47.556367] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:29.978 [2024-11-26 12:50:47.556431] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:29.978 BaseBdev2 00:07:29.978 [2024-11-26 12:50:47.556799] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:29.978 [2024-11-26 12:50:47.557013] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:29.978 [2024-11-26 12:50:47.557072] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:29.978 [2024-11-26 12:50:47.557268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.978 12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.978 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:29.978 12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:29.978 12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:29.978 12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:29.978 12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:29.978 12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:29.978 12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:29.978 12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.978 12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.978 12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.978 12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:29.978 12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.978 
12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.978 [ 00:07:29.978 { 00:07:29.978 "name": "BaseBdev2", 00:07:29.978 "aliases": [ 00:07:29.978 "5a5e2a8f-b49f-49b1-83c9-f3f2c6ea838c" 00:07:29.978 ], 00:07:29.978 "product_name": "Malloc disk", 00:07:29.978 "block_size": 512, 00:07:29.978 "num_blocks": 65536, 00:07:29.978 "uuid": "5a5e2a8f-b49f-49b1-83c9-f3f2c6ea838c", 00:07:29.978 "assigned_rate_limits": { 00:07:29.978 "rw_ios_per_sec": 0, 00:07:29.978 "rw_mbytes_per_sec": 0, 00:07:29.978 "r_mbytes_per_sec": 0, 00:07:29.978 "w_mbytes_per_sec": 0 00:07:29.978 }, 00:07:29.978 "claimed": true, 00:07:29.978 "claim_type": "exclusive_write", 00:07:29.978 "zoned": false, 00:07:29.978 "supported_io_types": { 00:07:29.978 "read": true, 00:07:29.978 "write": true, 00:07:29.978 "unmap": true, 00:07:29.978 "flush": true, 00:07:29.978 "reset": true, 00:07:29.978 "nvme_admin": false, 00:07:29.978 "nvme_io": false, 00:07:29.978 "nvme_io_md": false, 00:07:29.978 "write_zeroes": true, 00:07:29.978 "zcopy": true, 00:07:29.978 "get_zone_info": false, 00:07:29.978 "zone_management": false, 00:07:29.978 "zone_append": false, 00:07:29.978 "compare": false, 00:07:29.978 "compare_and_write": false, 00:07:29.978 "abort": true, 00:07:29.978 "seek_hole": false, 00:07:29.978 "seek_data": false, 00:07:29.978 "copy": true, 00:07:29.978 "nvme_iov_md": false 00:07:29.978 }, 00:07:29.978 "memory_domains": [ 00:07:29.978 { 00:07:29.978 "dma_device_id": "system", 00:07:29.978 "dma_device_type": 1 00:07:29.978 }, 00:07:29.978 { 00:07:29.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.978 "dma_device_type": 2 00:07:29.978 } 00:07:29.978 ], 00:07:29.978 "driver_specific": {} 00:07:29.978 } 00:07:29.978 ] 00:07:29.978 12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.978 12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:29.978 12:50:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:29.978 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:29.978 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:29.978 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.978 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:29.978 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:29.978 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.978 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.978 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.978 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.978 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.979 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.979 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.979 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.979 12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.979 12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.979 12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.979 12:50:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.979 "name": "Existed_Raid", 00:07:29.979 "uuid": "e80459c2-8c41-42a9-86f5-7fa6c9415239", 00:07:29.979 "strip_size_kb": 64, 00:07:29.979 "state": "online", 00:07:29.979 "raid_level": "concat", 00:07:29.979 "superblock": true, 00:07:29.979 "num_base_bdevs": 2, 00:07:29.979 "num_base_bdevs_discovered": 2, 00:07:29.979 "num_base_bdevs_operational": 2, 00:07:29.979 "base_bdevs_list": [ 00:07:29.979 { 00:07:29.979 "name": "BaseBdev1", 00:07:29.979 "uuid": "e13520fd-0b38-4a41-9b7f-ae44cbdd1264", 00:07:29.979 "is_configured": true, 00:07:29.979 "data_offset": 2048, 00:07:29.979 "data_size": 63488 00:07:29.979 }, 00:07:29.979 { 00:07:29.979 "name": "BaseBdev2", 00:07:29.979 "uuid": "5a5e2a8f-b49f-49b1-83c9-f3f2c6ea838c", 00:07:29.979 "is_configured": true, 00:07:29.979 "data_offset": 2048, 00:07:29.979 "data_size": 63488 00:07:29.979 } 00:07:29.979 ] 00:07:29.979 }' 00:07:29.979 12:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.979 12:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.548 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:30.548 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:30.548 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:30.548 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:30.548 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:30.548 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:30.548 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:30.548 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.548 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.548 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:30.548 [2024-11-26 12:50:48.011538] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:30.548 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.548 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:30.548 "name": "Existed_Raid", 00:07:30.548 "aliases": [ 00:07:30.548 "e80459c2-8c41-42a9-86f5-7fa6c9415239" 00:07:30.548 ], 00:07:30.548 "product_name": "Raid Volume", 00:07:30.548 "block_size": 512, 00:07:30.548 "num_blocks": 126976, 00:07:30.548 "uuid": "e80459c2-8c41-42a9-86f5-7fa6c9415239", 00:07:30.548 "assigned_rate_limits": { 00:07:30.548 "rw_ios_per_sec": 0, 00:07:30.548 "rw_mbytes_per_sec": 0, 00:07:30.548 "r_mbytes_per_sec": 0, 00:07:30.548 "w_mbytes_per_sec": 0 00:07:30.548 }, 00:07:30.548 "claimed": false, 00:07:30.548 "zoned": false, 00:07:30.548 "supported_io_types": { 00:07:30.548 "read": true, 00:07:30.548 "write": true, 00:07:30.548 "unmap": true, 00:07:30.548 "flush": true, 00:07:30.548 "reset": true, 00:07:30.548 "nvme_admin": false, 00:07:30.548 "nvme_io": false, 00:07:30.548 "nvme_io_md": false, 00:07:30.548 "write_zeroes": true, 00:07:30.548 "zcopy": false, 00:07:30.548 "get_zone_info": false, 00:07:30.548 "zone_management": false, 00:07:30.548 "zone_append": false, 00:07:30.548 "compare": false, 00:07:30.548 "compare_and_write": false, 00:07:30.548 "abort": false, 00:07:30.548 "seek_hole": false, 00:07:30.548 "seek_data": false, 00:07:30.548 "copy": false, 00:07:30.548 "nvme_iov_md": false 00:07:30.548 }, 00:07:30.548 "memory_domains": [ 00:07:30.548 { 00:07:30.548 
"dma_device_id": "system", 00:07:30.548 "dma_device_type": 1 00:07:30.548 }, 00:07:30.548 { 00:07:30.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.548 "dma_device_type": 2 00:07:30.548 }, 00:07:30.548 { 00:07:30.548 "dma_device_id": "system", 00:07:30.548 "dma_device_type": 1 00:07:30.548 }, 00:07:30.548 { 00:07:30.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.548 "dma_device_type": 2 00:07:30.548 } 00:07:30.548 ], 00:07:30.548 "driver_specific": { 00:07:30.548 "raid": { 00:07:30.548 "uuid": "e80459c2-8c41-42a9-86f5-7fa6c9415239", 00:07:30.548 "strip_size_kb": 64, 00:07:30.548 "state": "online", 00:07:30.548 "raid_level": "concat", 00:07:30.548 "superblock": true, 00:07:30.548 "num_base_bdevs": 2, 00:07:30.548 "num_base_bdevs_discovered": 2, 00:07:30.548 "num_base_bdevs_operational": 2, 00:07:30.548 "base_bdevs_list": [ 00:07:30.548 { 00:07:30.548 "name": "BaseBdev1", 00:07:30.548 "uuid": "e13520fd-0b38-4a41-9b7f-ae44cbdd1264", 00:07:30.548 "is_configured": true, 00:07:30.548 "data_offset": 2048, 00:07:30.548 "data_size": 63488 00:07:30.548 }, 00:07:30.548 { 00:07:30.548 "name": "BaseBdev2", 00:07:30.548 "uuid": "5a5e2a8f-b49f-49b1-83c9-f3f2c6ea838c", 00:07:30.548 "is_configured": true, 00:07:30.548 "data_offset": 2048, 00:07:30.548 "data_size": 63488 00:07:30.548 } 00:07:30.548 ] 00:07:30.548 } 00:07:30.548 } 00:07:30.548 }' 00:07:30.548 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:30.548 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:30.548 BaseBdev2' 00:07:30.548 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.548 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:30.548 12:50:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:30.548 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.548 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:30.548 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.548 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.548 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.548 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:30.548 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:30.548 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:30.548 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:30.548 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.548 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.548 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.808 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.808 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:30.808 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:30.808 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:30.808 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.808 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.808 [2024-11-26 12:50:48.267291] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:30.808 [2024-11-26 12:50:48.267324] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:30.808 [2024-11-26 12:50:48.267371] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:30.808 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.808 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:30.808 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:30.808 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:30.808 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:30.809 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:30.809 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:30.809 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:30.809 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:30.809 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:30.809 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:30.809 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:30.809 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.809 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.809 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.809 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.809 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:30.809 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.809 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.809 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.809 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.809 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.809 "name": "Existed_Raid", 00:07:30.809 "uuid": "e80459c2-8c41-42a9-86f5-7fa6c9415239", 00:07:30.809 "strip_size_kb": 64, 00:07:30.809 "state": "offline", 00:07:30.809 "raid_level": "concat", 00:07:30.809 "superblock": true, 00:07:30.809 "num_base_bdevs": 2, 00:07:30.809 "num_base_bdevs_discovered": 1, 00:07:30.809 "num_base_bdevs_operational": 1, 00:07:30.809 "base_bdevs_list": [ 00:07:30.809 { 00:07:30.809 "name": null, 00:07:30.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:30.809 "is_configured": false, 00:07:30.809 "data_offset": 0, 00:07:30.809 "data_size": 63488 00:07:30.809 }, 00:07:30.809 { 00:07:30.809 "name": "BaseBdev2", 00:07:30.809 "uuid": "5a5e2a8f-b49f-49b1-83c9-f3f2c6ea838c", 00:07:30.809 "is_configured": true, 00:07:30.809 "data_offset": 2048, 00:07:30.809 "data_size": 63488 00:07:30.809 } 00:07:30.809 ] 
00:07:30.809 }' 00:07:30.809 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.809 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.069 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:31.069 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:31.069 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.069 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.069 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.069 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:31.069 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.329 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:31.329 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:31.329 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:31.329 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.329 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.329 [2024-11-26 12:50:48.761654] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:31.329 [2024-11-26 12:50:48.761752] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:31.329 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.329 12:50:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:31.329 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:31.329 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.329 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:31.329 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.329 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.329 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.329 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:31.329 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:31.329 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:31.329 12:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73521 00:07:31.329 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73521 ']' 00:07:31.329 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73521 00:07:31.329 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:31.329 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:31.329 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73521 00:07:31.329 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:31.329 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:07:31.329 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73521' 00:07:31.329 killing process with pid 73521 00:07:31.329 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73521 00:07:31.329 [2024-11-26 12:50:48.868534] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:31.329 12:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73521 00:07:31.329 [2024-11-26 12:50:48.869546] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:31.589 12:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:31.589 00:07:31.589 real 0m3.892s 00:07:31.589 user 0m6.078s 00:07:31.589 sys 0m0.791s 00:07:31.589 12:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.589 ************************************ 00:07:31.589 END TEST raid_state_function_test_sb 00:07:31.589 ************************************ 00:07:31.589 12:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.589 12:50:49 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:31.589 12:50:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:31.589 12:50:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.589 12:50:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:31.589 ************************************ 00:07:31.589 START TEST raid_superblock_test 00:07:31.589 ************************************ 00:07:31.589 12:50:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:07:31.589 12:50:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:31.589 12:50:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 
-- # local num_base_bdevs=2 00:07:31.589 12:50:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:31.589 12:50:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:31.589 12:50:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:31.589 12:50:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:31.589 12:50:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:31.589 12:50:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:31.589 12:50:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:31.589 12:50:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:31.589 12:50:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:31.589 12:50:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:31.589 12:50:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:31.589 12:50:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:31.589 12:50:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:31.589 12:50:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:31.589 12:50:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73757 00:07:31.589 12:50:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:31.589 12:50:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73757 00:07:31.589 12:50:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 73757 ']' 00:07:31.589 12:50:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.589 12:50:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:31.589 12:50:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.589 12:50:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:31.589 12:50:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.851 [2024-11-26 12:50:49.275229] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:31.851 [2024-11-26 12:50:49.275368] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73757 ] 00:07:31.851 [2024-11-26 12:50:49.441092] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.851 [2024-11-26 12:50:49.486226] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.111 [2024-11-26 12:50:49.528338] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.111 [2024-11-26 12:50:49.528471] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.679 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:32.679 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:32.679 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:32.679 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:32.679 
12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:32.679 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:32.679 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:32.679 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.680 malloc1 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.680 [2024-11-26 12:50:50.110766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:32.680 [2024-11-26 12:50:50.110884] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.680 [2024-11-26 12:50:50.110927] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:32.680 [2024-11-26 12:50:50.110990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:07:32.680 [2024-11-26 12:50:50.113082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.680 [2024-11-26 12:50:50.113157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:32.680 pt1 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.680 malloc2 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.680 [2024-11-26 12:50:50.152793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:32.680 [2024-11-26 12:50:50.152901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.680 [2024-11-26 12:50:50.152939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:32.680 [2024-11-26 12:50:50.152973] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.680 [2024-11-26 12:50:50.154997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.680 [2024-11-26 12:50:50.155082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:32.680 pt2 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.680 [2024-11-26 12:50:50.164813] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:32.680 [2024-11-26 12:50:50.166619] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:32.680 [2024-11-26 12:50:50.166749] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:32.680 [2024-11-26 12:50:50.166769] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:07:32.680 [2024-11-26 12:50:50.167026] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:32.680 [2024-11-26 12:50:50.167136] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:32.680 [2024-11-26 12:50:50.167145] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:07:32.680 [2024-11-26 12:50:50.167321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:32.680 12:50:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.680 "name": "raid_bdev1", 00:07:32.680 "uuid": "d160d9e3-3a51-4407-b666-355506a45a2f", 00:07:32.680 "strip_size_kb": 64, 00:07:32.680 "state": "online", 00:07:32.680 "raid_level": "concat", 00:07:32.680 "superblock": true, 00:07:32.680 "num_base_bdevs": 2, 00:07:32.680 "num_base_bdevs_discovered": 2, 00:07:32.680 "num_base_bdevs_operational": 2, 00:07:32.680 "base_bdevs_list": [ 00:07:32.680 { 00:07:32.680 "name": "pt1", 00:07:32.680 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:32.680 "is_configured": true, 00:07:32.680 "data_offset": 2048, 00:07:32.680 "data_size": 63488 00:07:32.680 }, 00:07:32.680 { 00:07:32.680 "name": "pt2", 00:07:32.680 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:32.680 "is_configured": true, 00:07:32.680 "data_offset": 2048, 00:07:32.680 "data_size": 63488 00:07:32.680 } 00:07:32.680 ] 00:07:32.680 }' 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.680 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.940 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:32.940 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:32.940 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:32.940 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:32.940 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:32.940 
12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:32.940 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:32.940 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:32.940 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.941 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.941 [2024-11-26 12:50:50.616313] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.201 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.201 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:33.201 "name": "raid_bdev1", 00:07:33.201 "aliases": [ 00:07:33.201 "d160d9e3-3a51-4407-b666-355506a45a2f" 00:07:33.201 ], 00:07:33.201 "product_name": "Raid Volume", 00:07:33.201 "block_size": 512, 00:07:33.201 "num_blocks": 126976, 00:07:33.201 "uuid": "d160d9e3-3a51-4407-b666-355506a45a2f", 00:07:33.201 "assigned_rate_limits": { 00:07:33.201 "rw_ios_per_sec": 0, 00:07:33.201 "rw_mbytes_per_sec": 0, 00:07:33.201 "r_mbytes_per_sec": 0, 00:07:33.201 "w_mbytes_per_sec": 0 00:07:33.201 }, 00:07:33.201 "claimed": false, 00:07:33.201 "zoned": false, 00:07:33.201 "supported_io_types": { 00:07:33.201 "read": true, 00:07:33.201 "write": true, 00:07:33.201 "unmap": true, 00:07:33.201 "flush": true, 00:07:33.201 "reset": true, 00:07:33.201 "nvme_admin": false, 00:07:33.201 "nvme_io": false, 00:07:33.201 "nvme_io_md": false, 00:07:33.201 "write_zeroes": true, 00:07:33.201 "zcopy": false, 00:07:33.201 "get_zone_info": false, 00:07:33.201 "zone_management": false, 00:07:33.201 "zone_append": false, 00:07:33.201 "compare": false, 00:07:33.201 "compare_and_write": false, 00:07:33.201 "abort": false, 00:07:33.201 "seek_hole": false, 00:07:33.201 
"seek_data": false, 00:07:33.201 "copy": false, 00:07:33.201 "nvme_iov_md": false 00:07:33.201 }, 00:07:33.201 "memory_domains": [ 00:07:33.201 { 00:07:33.201 "dma_device_id": "system", 00:07:33.201 "dma_device_type": 1 00:07:33.201 }, 00:07:33.201 { 00:07:33.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.201 "dma_device_type": 2 00:07:33.201 }, 00:07:33.202 { 00:07:33.202 "dma_device_id": "system", 00:07:33.202 "dma_device_type": 1 00:07:33.202 }, 00:07:33.202 { 00:07:33.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.202 "dma_device_type": 2 00:07:33.202 } 00:07:33.202 ], 00:07:33.202 "driver_specific": { 00:07:33.202 "raid": { 00:07:33.202 "uuid": "d160d9e3-3a51-4407-b666-355506a45a2f", 00:07:33.202 "strip_size_kb": 64, 00:07:33.202 "state": "online", 00:07:33.202 "raid_level": "concat", 00:07:33.202 "superblock": true, 00:07:33.202 "num_base_bdevs": 2, 00:07:33.202 "num_base_bdevs_discovered": 2, 00:07:33.202 "num_base_bdevs_operational": 2, 00:07:33.202 "base_bdevs_list": [ 00:07:33.202 { 00:07:33.202 "name": "pt1", 00:07:33.202 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:33.202 "is_configured": true, 00:07:33.202 "data_offset": 2048, 00:07:33.202 "data_size": 63488 00:07:33.202 }, 00:07:33.202 { 00:07:33.202 "name": "pt2", 00:07:33.202 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:33.202 "is_configured": true, 00:07:33.202 "data_offset": 2048, 00:07:33.202 "data_size": 63488 00:07:33.202 } 00:07:33.202 ] 00:07:33.202 } 00:07:33.202 } 00:07:33.202 }' 00:07:33.202 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:33.202 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:33.202 pt2' 00:07:33.202 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.202 12:50:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:33.202 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:33.202 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:33.202 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.202 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.202 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.202 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.202 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:33.202 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:33.202 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:33.202 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.202 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:33.202 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.202 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.202 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.202 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:33.202 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:33.202 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:33.202 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:33.202 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.202 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.202 [2024-11-26 12:50:50.863761] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.462 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.462 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d160d9e3-3a51-4407-b666-355506a45a2f 00:07:33.462 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d160d9e3-3a51-4407-b666-355506a45a2f ']' 00:07:33.462 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:33.462 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.462 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.462 [2024-11-26 12:50:50.907463] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:33.462 [2024-11-26 12:50:50.907529] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:33.462 [2024-11-26 12:50:50.907608] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:33.462 [2024-11-26 12:50:50.907685] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:33.462 [2024-11-26 12:50:50.907766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:07:33.462 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.462 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r 
'.[]' 00:07:33.462 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.462 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.462 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.462 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.462 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:33.462 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:33.462 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:33.462 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:33.462 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.462 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.462 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.462 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:33.462 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:33.462 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.462 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.462 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.462 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:33.462 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.462 12:50:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] 
| select(.product_name == "passthru")] | any' 00:07:33.462 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.462 12:50:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.462 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:33.462 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:33.462 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:33.462 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:33.462 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:33.462 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:33.462 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:33.462 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:33.462 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:33.462 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.462 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.462 [2024-11-26 12:50:51.023320] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:33.462 [2024-11-26 12:50:51.025126] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:33.462 [2024-11-26 12:50:51.025246] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:33.462 [2024-11-26 12:50:51.025316] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:33.462 [2024-11-26 12:50:51.025365] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:33.462 [2024-11-26 12:50:51.025394] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:07:33.462 request: 00:07:33.462 { 00:07:33.462 "name": "raid_bdev1", 00:07:33.462 "raid_level": "concat", 00:07:33.462 "base_bdevs": [ 00:07:33.462 "malloc1", 00:07:33.462 "malloc2" 00:07:33.462 ], 00:07:33.462 "strip_size_kb": 64, 00:07:33.462 "superblock": false, 00:07:33.462 "method": "bdev_raid_create", 00:07:33.462 "req_id": 1 00:07:33.462 } 00:07:33.462 Got JSON-RPC error response 00:07:33.462 response: 00:07:33.462 { 00:07:33.462 "code": -17, 00:07:33.462 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:33.462 } 00:07:33.462 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:33.462 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:33.462 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:33.462 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:33.462 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:33.462 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.462 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.462 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.462 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:33.462 
12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.462 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:33.462 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:33.462 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:33.462 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.462 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.462 [2024-11-26 12:50:51.091275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:33.462 [2024-11-26 12:50:51.091319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.462 [2024-11-26 12:50:51.091335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:33.462 [2024-11-26 12:50:51.091343] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.463 [2024-11-26 12:50:51.093332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.463 [2024-11-26 12:50:51.093367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:33.463 [2024-11-26 12:50:51.093427] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:33.463 [2024-11-26 12:50:51.093462] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:33.463 pt1 00:07:33.463 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.463 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:33.463 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:33.463 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:33.463 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:33.463 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.463 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.463 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.463 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.463 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.463 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.463 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.463 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:33.463 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.463 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.463 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.722 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.722 "name": "raid_bdev1", 00:07:33.722 "uuid": "d160d9e3-3a51-4407-b666-355506a45a2f", 00:07:33.722 "strip_size_kb": 64, 00:07:33.722 "state": "configuring", 00:07:33.722 "raid_level": "concat", 00:07:33.722 "superblock": true, 00:07:33.722 "num_base_bdevs": 2, 00:07:33.722 "num_base_bdevs_discovered": 1, 00:07:33.722 "num_base_bdevs_operational": 2, 00:07:33.722 "base_bdevs_list": [ 00:07:33.722 { 00:07:33.722 "name": "pt1", 00:07:33.722 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:33.722 "is_configured": true, 00:07:33.722 "data_offset": 2048, 00:07:33.722 "data_size": 63488 00:07:33.722 }, 00:07:33.722 { 00:07:33.722 "name": null, 00:07:33.722 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:33.722 "is_configured": false, 00:07:33.722 "data_offset": 2048, 00:07:33.722 "data_size": 63488 00:07:33.722 } 00:07:33.722 ] 00:07:33.722 }' 00:07:33.722 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.722 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.982 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:33.982 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:33.982 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:33.982 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:33.982 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.982 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.982 [2024-11-26 12:50:51.546528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:33.982 [2024-11-26 12:50:51.546632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.982 [2024-11-26 12:50:51.546658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:33.982 [2024-11-26 12:50:51.546668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.982 [2024-11-26 12:50:51.547024] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.982 [2024-11-26 12:50:51.547041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:07:33.982 [2024-11-26 12:50:51.547105] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:33.982 [2024-11-26 12:50:51.547123] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:33.982 [2024-11-26 12:50:51.547245] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:33.982 [2024-11-26 12:50:51.547255] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:33.982 [2024-11-26 12:50:51.547485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:33.982 [2024-11-26 12:50:51.547585] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:33.982 [2024-11-26 12:50:51.547599] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:33.982 [2024-11-26 12:50:51.547689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.982 pt2 00:07:33.982 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.982 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:33.982 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:33.982 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:33.982 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:33.982 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:33.982 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:33.982 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.982 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:07:33.982 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.982 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.982 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.982 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.982 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.982 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.982 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.982 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:33.982 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.982 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.982 "name": "raid_bdev1", 00:07:33.982 "uuid": "d160d9e3-3a51-4407-b666-355506a45a2f", 00:07:33.982 "strip_size_kb": 64, 00:07:33.982 "state": "online", 00:07:33.982 "raid_level": "concat", 00:07:33.982 "superblock": true, 00:07:33.982 "num_base_bdevs": 2, 00:07:33.982 "num_base_bdevs_discovered": 2, 00:07:33.982 "num_base_bdevs_operational": 2, 00:07:33.983 "base_bdevs_list": [ 00:07:33.983 { 00:07:33.983 "name": "pt1", 00:07:33.983 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:33.983 "is_configured": true, 00:07:33.983 "data_offset": 2048, 00:07:33.983 "data_size": 63488 00:07:33.983 }, 00:07:33.983 { 00:07:33.983 "name": "pt2", 00:07:33.983 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:33.983 "is_configured": true, 00:07:33.983 "data_offset": 2048, 00:07:33.983 "data_size": 63488 00:07:33.983 } 00:07:33.983 ] 00:07:33.983 }' 00:07:33.983 12:50:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.983 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.551 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:34.551 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:34.551 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:34.551 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:34.551 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:34.552 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:34.552 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:34.552 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:34.552 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.552 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.552 [2024-11-26 12:50:51.966025] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:34.552 12:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.552 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:34.552 "name": "raid_bdev1", 00:07:34.552 "aliases": [ 00:07:34.552 "d160d9e3-3a51-4407-b666-355506a45a2f" 00:07:34.552 ], 00:07:34.552 "product_name": "Raid Volume", 00:07:34.552 "block_size": 512, 00:07:34.552 "num_blocks": 126976, 00:07:34.552 "uuid": "d160d9e3-3a51-4407-b666-355506a45a2f", 00:07:34.552 "assigned_rate_limits": { 00:07:34.552 "rw_ios_per_sec": 0, 00:07:34.552 "rw_mbytes_per_sec": 0, 00:07:34.552 
"r_mbytes_per_sec": 0, 00:07:34.552 "w_mbytes_per_sec": 0 00:07:34.552 }, 00:07:34.552 "claimed": false, 00:07:34.552 "zoned": false, 00:07:34.552 "supported_io_types": { 00:07:34.552 "read": true, 00:07:34.552 "write": true, 00:07:34.552 "unmap": true, 00:07:34.552 "flush": true, 00:07:34.552 "reset": true, 00:07:34.552 "nvme_admin": false, 00:07:34.552 "nvme_io": false, 00:07:34.552 "nvme_io_md": false, 00:07:34.552 "write_zeroes": true, 00:07:34.552 "zcopy": false, 00:07:34.552 "get_zone_info": false, 00:07:34.552 "zone_management": false, 00:07:34.552 "zone_append": false, 00:07:34.552 "compare": false, 00:07:34.552 "compare_and_write": false, 00:07:34.552 "abort": false, 00:07:34.552 "seek_hole": false, 00:07:34.552 "seek_data": false, 00:07:34.552 "copy": false, 00:07:34.552 "nvme_iov_md": false 00:07:34.552 }, 00:07:34.552 "memory_domains": [ 00:07:34.552 { 00:07:34.552 "dma_device_id": "system", 00:07:34.552 "dma_device_type": 1 00:07:34.552 }, 00:07:34.552 { 00:07:34.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.552 "dma_device_type": 2 00:07:34.552 }, 00:07:34.552 { 00:07:34.552 "dma_device_id": "system", 00:07:34.552 "dma_device_type": 1 00:07:34.552 }, 00:07:34.552 { 00:07:34.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.552 "dma_device_type": 2 00:07:34.552 } 00:07:34.552 ], 00:07:34.552 "driver_specific": { 00:07:34.552 "raid": { 00:07:34.552 "uuid": "d160d9e3-3a51-4407-b666-355506a45a2f", 00:07:34.552 "strip_size_kb": 64, 00:07:34.552 "state": "online", 00:07:34.552 "raid_level": "concat", 00:07:34.552 "superblock": true, 00:07:34.552 "num_base_bdevs": 2, 00:07:34.552 "num_base_bdevs_discovered": 2, 00:07:34.552 "num_base_bdevs_operational": 2, 00:07:34.552 "base_bdevs_list": [ 00:07:34.552 { 00:07:34.552 "name": "pt1", 00:07:34.552 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:34.552 "is_configured": true, 00:07:34.552 "data_offset": 2048, 00:07:34.552 "data_size": 63488 00:07:34.552 }, 00:07:34.552 { 00:07:34.552 "name": 
"pt2", 00:07:34.552 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:34.552 "is_configured": true, 00:07:34.552 "data_offset": 2048, 00:07:34.552 "data_size": 63488 00:07:34.552 } 00:07:34.552 ] 00:07:34.552 } 00:07:34.552 } 00:07:34.552 }' 00:07:34.552 12:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:34.552 12:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:34.552 pt2' 00:07:34.552 12:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.552 12:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:34.552 12:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.552 12:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:34.552 12:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.552 12:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.552 12:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.552 12:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.552 12:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.552 12:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.552 12:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.552 12:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.552 12:50:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:34.552 12:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.552 12:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.552 12:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.552 12:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.552 12:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.552 12:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:34.552 12:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.552 12:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:34.552 12:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.552 [2024-11-26 12:50:52.189624] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:34.552 12:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.552 12:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d160d9e3-3a51-4407-b666-355506a45a2f '!=' d160d9e3-3a51-4407-b666-355506a45a2f ']' 00:07:34.552 12:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:34.552 12:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:34.552 12:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:34.813 12:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73757 00:07:34.813 12:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 73757 ']' 00:07:34.813 12:50:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@954 -- # kill -0 73757 00:07:34.813 12:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:34.813 12:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:34.813 12:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73757 00:07:34.813 killing process with pid 73757 00:07:34.813 12:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:34.813 12:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:34.813 12:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73757' 00:07:34.813 12:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 73757 00:07:34.813 [2024-11-26 12:50:52.271899] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:34.813 [2024-11-26 12:50:52.271970] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:34.813 [2024-11-26 12:50:52.272016] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:34.813 [2024-11-26 12:50:52.272024] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:34.813 12:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 73757 00:07:34.813 [2024-11-26 12:50:52.294316] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:35.073 12:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:35.073 00:07:35.073 real 0m3.357s 00:07:35.073 user 0m5.125s 00:07:35.073 sys 0m0.742s 00:07:35.073 12:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.073 ************************************ 00:07:35.073 END TEST 
raid_superblock_test 00:07:35.073 ************************************ 00:07:35.073 12:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.073 12:50:52 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:35.073 12:50:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:35.073 12:50:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.073 12:50:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:35.073 ************************************ 00:07:35.073 START TEST raid_read_error_test 00:07:35.073 ************************************ 00:07:35.073 12:50:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:07:35.073 12:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:35.073 12:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:35.073 12:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:35.073 12:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:35.073 12:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:35.073 12:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:35.073 12:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:35.073 12:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:35.073 12:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:35.073 12:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:35.073 12:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:35.074 12:50:52 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:35.074 12:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:35.074 12:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:35.074 12:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:35.074 12:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:35.074 12:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:35.074 12:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:35.074 12:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:35.074 12:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:35.074 12:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:35.074 12:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:35.074 12:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Q89xYIz4jt 00:07:35.074 12:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73957 00:07:35.074 12:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:35.074 12:50:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73957 00:07:35.074 12:50:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 73957 ']' 00:07:35.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:35.074 12:50:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.074 12:50:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.074 12:50:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.074 12:50:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.074 12:50:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.074 [2024-11-26 12:50:52.708894] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:35.074 [2024-11-26 12:50:52.709018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73957 ] 00:07:35.335 [2024-11-26 12:50:52.848506] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.335 [2024-11-26 12:50:52.892700] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.335 [2024-11-26 12:50:52.934108] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.335 [2024-11-26 12:50:52.934153] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.904 12:50:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:35.904 12:50:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:35.904 12:50:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:35.904 12:50:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:35.904 12:50:53 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.904 12:50:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.904 BaseBdev1_malloc 00:07:35.904 12:50:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.904 12:50:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:35.904 12:50:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.904 12:50:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.904 true 00:07:35.904 12:50:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.904 12:50:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:35.904 12:50:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.904 12:50:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.904 [2024-11-26 12:50:53.563649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:35.904 [2024-11-26 12:50:53.563711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.904 [2024-11-26 12:50:53.563735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:35.904 [2024-11-26 12:50:53.563747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.904 [2024-11-26 12:50:53.565826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.905 [2024-11-26 12:50:53.565867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:35.905 BaseBdev1 00:07:35.905 12:50:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.905 
12:50:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:35.905 12:50:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:35.905 12:50:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.905 12:50:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.164 BaseBdev2_malloc 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.164 true 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.164 [2024-11-26 12:50:53.614417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:36.164 [2024-11-26 12:50:53.614469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:36.164 [2024-11-26 12:50:53.614487] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:36.164 [2024-11-26 12:50:53.614496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:36.164 [2024-11-26 12:50:53.616481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:07:36.164 [2024-11-26 12:50:53.616528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:36.164 BaseBdev2 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.164 [2024-11-26 12:50:53.626442] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:36.164 [2024-11-26 12:50:53.628276] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:36.164 [2024-11-26 12:50:53.628444] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:36.164 [2024-11-26 12:50:53.628457] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:36.164 [2024-11-26 12:50:53.628691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:36.164 [2024-11-26 12:50:53.628805] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:36.164 [2024-11-26 12:50:53.628818] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:36.164 [2024-11-26 12:50:53.628942] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.164 "name": "raid_bdev1", 00:07:36.164 "uuid": "18e60a15-3de1-4383-95de-0a6b04e3a377", 00:07:36.164 "strip_size_kb": 64, 00:07:36.164 "state": "online", 00:07:36.164 "raid_level": "concat", 00:07:36.164 "superblock": true, 00:07:36.164 "num_base_bdevs": 2, 00:07:36.164 "num_base_bdevs_discovered": 2, 00:07:36.164 "num_base_bdevs_operational": 2, 00:07:36.164 "base_bdevs_list": [ 00:07:36.164 { 00:07:36.164 "name": "BaseBdev1", 00:07:36.164 "uuid": 
"63993097-14b3-571c-aef4-4727f041f460", 00:07:36.164 "is_configured": true, 00:07:36.164 "data_offset": 2048, 00:07:36.164 "data_size": 63488 00:07:36.164 }, 00:07:36.164 { 00:07:36.164 "name": "BaseBdev2", 00:07:36.164 "uuid": "768b113a-b37e-5d18-8a87-b956367afcef", 00:07:36.164 "is_configured": true, 00:07:36.164 "data_offset": 2048, 00:07:36.164 "data_size": 63488 00:07:36.164 } 00:07:36.164 ] 00:07:36.164 }' 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.164 12:50:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.423 12:50:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:36.423 12:50:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:36.683 [2024-11-26 12:50:54.165833] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:37.624 12:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:37.624 12:50:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.624 12:50:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.624 12:50:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.624 12:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:37.624 12:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:37.624 12:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:37.624 12:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:37.624 12:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:37.624 12:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.624 12:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:37.624 12:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.624 12:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.624 12:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.624 12:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.624 12:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.624 12:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.624 12:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.624 12:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:37.624 12:50:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.624 12:50:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.624 12:50:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.624 12:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.624 "name": "raid_bdev1", 00:07:37.624 "uuid": "18e60a15-3de1-4383-95de-0a6b04e3a377", 00:07:37.624 "strip_size_kb": 64, 00:07:37.624 "state": "online", 00:07:37.624 "raid_level": "concat", 00:07:37.624 "superblock": true, 00:07:37.624 "num_base_bdevs": 2, 00:07:37.624 "num_base_bdevs_discovered": 2, 00:07:37.624 "num_base_bdevs_operational": 2, 00:07:37.624 "base_bdevs_list": [ 00:07:37.624 { 00:07:37.624 "name": "BaseBdev1", 00:07:37.624 "uuid": 
"63993097-14b3-571c-aef4-4727f041f460", 00:07:37.624 "is_configured": true, 00:07:37.624 "data_offset": 2048, 00:07:37.624 "data_size": 63488 00:07:37.624 }, 00:07:37.624 { 00:07:37.624 "name": "BaseBdev2", 00:07:37.624 "uuid": "768b113a-b37e-5d18-8a87-b956367afcef", 00:07:37.624 "is_configured": true, 00:07:37.624 "data_offset": 2048, 00:07:37.624 "data_size": 63488 00:07:37.624 } 00:07:37.624 ] 00:07:37.624 }' 00:07:37.624 12:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.624 12:50:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.883 12:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:37.883 12:50:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.883 12:50:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.883 [2024-11-26 12:50:55.507877] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:37.883 [2024-11-26 12:50:55.507999] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:37.883 [2024-11-26 12:50:55.510403] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:37.883 [2024-11-26 12:50:55.510497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.883 [2024-11-26 12:50:55.510550] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:37.883 [2024-11-26 12:50:55.510589] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:37.883 { 00:07:37.883 "results": [ 00:07:37.883 { 00:07:37.883 "job": "raid_bdev1", 00:07:37.883 "core_mask": "0x1", 00:07:37.883 "workload": "randrw", 00:07:37.883 "percentage": 50, 00:07:37.883 "status": "finished", 00:07:37.883 "queue_depth": 1, 00:07:37.883 "io_size": 
131072, 00:07:37.883 "runtime": 1.342938, 00:07:37.883 "iops": 18285.2819713196, 00:07:37.883 "mibps": 2285.66024641495, 00:07:37.883 "io_failed": 1, 00:07:37.883 "io_timeout": 0, 00:07:37.883 "avg_latency_us": 75.65279302960246, 00:07:37.883 "min_latency_us": 24.034934497816593, 00:07:37.883 "max_latency_us": 1366.5257641921398 00:07:37.883 } 00:07:37.883 ], 00:07:37.883 "core_count": 1 00:07:37.883 } 00:07:37.883 12:50:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.883 12:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73957 00:07:37.883 12:50:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 73957 ']' 00:07:37.883 12:50:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 73957 00:07:37.883 12:50:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:37.883 12:50:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:37.883 12:50:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73957 00:07:37.883 12:50:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:37.883 12:50:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:37.883 killing process with pid 73957 00:07:37.883 12:50:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73957' 00:07:37.884 12:50:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 73957 00:07:37.884 [2024-11-26 12:50:55.559391] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:37.884 12:50:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 73957 00:07:38.143 [2024-11-26 12:50:55.574641] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:38.144 12:50:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:38.144 12:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Q89xYIz4jt 00:07:38.144 12:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:38.144 12:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:38.144 12:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:38.144 12:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:38.144 12:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:38.144 12:50:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:38.144 00:07:38.144 real 0m3.204s 00:07:38.144 user 0m4.028s 00:07:38.144 sys 0m0.520s 00:07:38.144 12:50:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.144 ************************************ 00:07:38.144 END TEST raid_read_error_test 00:07:38.144 ************************************ 00:07:38.144 12:50:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.416 12:50:55 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:38.416 12:50:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:38.416 12:50:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.416 12:50:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:38.416 ************************************ 00:07:38.416 START TEST raid_write_error_test 00:07:38.416 ************************************ 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 
00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:38.416 
12:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LLgR9bzGFP 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74091 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74091 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 74091 ']' 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:38.416 12:50:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.416 [2024-11-26 12:50:55.984844] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:38.416 [2024-11-26 12:50:55.984987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74091 ] 00:07:38.693 [2024-11-26 12:50:56.124334] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.693 [2024-11-26 12:50:56.167685] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.693 [2024-11-26 12:50:56.208693] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:38.693 [2024-11-26 12:50:56.208730] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.263 BaseBdev1_malloc 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.263 true 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.263 [2024-11-26 12:50:56.846044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:39.263 [2024-11-26 12:50:56.846102] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.263 [2024-11-26 12:50:56.846124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:39.263 [2024-11-26 12:50:56.846134] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.263 [2024-11-26 12:50:56.848211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.263 [2024-11-26 12:50:56.848319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:39.263 BaseBdev1 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.263 BaseBdev2_malloc 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:39.263 12:50:56 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.263 true 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.263 [2024-11-26 12:50:56.901503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:39.263 [2024-11-26 12:50:56.901628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.263 [2024-11-26 12:50:56.901701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:39.263 [2024-11-26 12:50:56.901754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.263 [2024-11-26 12:50:56.904641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.263 [2024-11-26 12:50:56.904737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:39.263 BaseBdev2 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.263 [2024-11-26 12:50:56.913597] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:39.263 [2024-11-26 12:50:56.915487] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:39.263 [2024-11-26 12:50:56.915702] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:39.263 [2024-11-26 12:50:56.915753] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:39.263 [2024-11-26 12:50:56.916028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:39.263 [2024-11-26 12:50:56.916208] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:39.263 [2024-11-26 12:50:56.916226] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:39.263 [2024-11-26 12:50:56.916354] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.263 12:50:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.263 12:50:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.523 12:50:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.523 "name": "raid_bdev1", 00:07:39.523 "uuid": "0ebc1be7-7e02-409e-8656-ea7123ce0f3e", 00:07:39.523 "strip_size_kb": 64, 00:07:39.523 "state": "online", 00:07:39.523 "raid_level": "concat", 00:07:39.523 "superblock": true, 00:07:39.523 "num_base_bdevs": 2, 00:07:39.523 "num_base_bdevs_discovered": 2, 00:07:39.523 "num_base_bdevs_operational": 2, 00:07:39.523 "base_bdevs_list": [ 00:07:39.523 { 00:07:39.523 "name": "BaseBdev1", 00:07:39.523 "uuid": "7e32c713-a42f-5aa8-b7d9-ed2429e6f267", 00:07:39.523 "is_configured": true, 00:07:39.523 "data_offset": 2048, 00:07:39.523 "data_size": 63488 00:07:39.523 }, 00:07:39.523 { 00:07:39.523 "name": "BaseBdev2", 00:07:39.523 "uuid": "ad52d1ae-f449-5563-bc1f-b1d0c42a024a", 00:07:39.523 "is_configured": true, 00:07:39.523 "data_offset": 2048, 00:07:39.523 "data_size": 63488 00:07:39.523 } 00:07:39.523 ] 00:07:39.523 }' 00:07:39.523 12:50:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.523 12:50:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.783 12:50:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:07:39.783 12:50:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:39.783 [2024-11-26 12:50:57.420978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:40.719 12:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:40.719 12:50:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.719 12:50:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.719 12:50:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.719 12:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:40.719 12:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:40.719 12:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:40.719 12:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:40.719 12:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:40.719 12:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:40.719 12:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:40.719 12:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.719 12:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:40.719 12:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.719 12:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:40.719 12:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.719 12:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.719 12:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.719 12:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:40.719 12:50:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.719 12:50:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.719 12:50:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.719 12:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.719 "name": "raid_bdev1", 00:07:40.719 "uuid": "0ebc1be7-7e02-409e-8656-ea7123ce0f3e", 00:07:40.719 "strip_size_kb": 64, 00:07:40.719 "state": "online", 00:07:40.719 "raid_level": "concat", 00:07:40.719 "superblock": true, 00:07:40.719 "num_base_bdevs": 2, 00:07:40.719 "num_base_bdevs_discovered": 2, 00:07:40.719 "num_base_bdevs_operational": 2, 00:07:40.719 "base_bdevs_list": [ 00:07:40.719 { 00:07:40.719 "name": "BaseBdev1", 00:07:40.719 "uuid": "7e32c713-a42f-5aa8-b7d9-ed2429e6f267", 00:07:40.719 "is_configured": true, 00:07:40.719 "data_offset": 2048, 00:07:40.719 "data_size": 63488 00:07:40.719 }, 00:07:40.719 { 00:07:40.719 "name": "BaseBdev2", 00:07:40.719 "uuid": "ad52d1ae-f449-5563-bc1f-b1d0c42a024a", 00:07:40.719 "is_configured": true, 00:07:40.719 "data_offset": 2048, 00:07:40.719 "data_size": 63488 00:07:40.719 } 00:07:40.719 ] 00:07:40.719 }' 00:07:40.719 12:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.719 12:50:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.288 12:50:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:41.288 12:50:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.288 12:50:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.288 [2024-11-26 12:50:58.760473] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:41.288 [2024-11-26 12:50:58.760505] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:41.288 [2024-11-26 12:50:58.762940] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:41.288 [2024-11-26 12:50:58.762977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.288 [2024-11-26 12:50:58.763012] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:41.288 [2024-11-26 12:50:58.763021] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:41.288 { 00:07:41.288 "results": [ 00:07:41.288 { 00:07:41.288 "job": "raid_bdev1", 00:07:41.288 "core_mask": "0x1", 00:07:41.288 "workload": "randrw", 00:07:41.288 "percentage": 50, 00:07:41.288 "status": "finished", 00:07:41.288 "queue_depth": 1, 00:07:41.288 "io_size": 131072, 00:07:41.288 "runtime": 1.340279, 00:07:41.288 "iops": 17988.045772559297, 00:07:41.288 "mibps": 2248.505721569912, 00:07:41.288 "io_failed": 1, 00:07:41.288 "io_timeout": 0, 00:07:41.288 "avg_latency_us": 76.85721201407667, 00:07:41.288 "min_latency_us": 24.258515283842794, 00:07:41.288 "max_latency_us": 1359.3711790393013 00:07:41.288 } 00:07:41.288 ], 00:07:41.288 "core_count": 1 00:07:41.288 } 00:07:41.288 12:50:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.288 12:50:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74091 00:07:41.288 12:50:58 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 74091 ']' 00:07:41.288 12:50:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 74091 00:07:41.288 12:50:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:41.288 12:50:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:41.288 12:50:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74091 00:07:41.288 12:50:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:41.288 12:50:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:41.288 12:50:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74091' 00:07:41.288 killing process with pid 74091 00:07:41.288 12:50:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 74091 00:07:41.288 [2024-11-26 12:50:58.815046] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:41.288 12:50:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 74091 00:07:41.288 [2024-11-26 12:50:58.829685] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:41.548 12:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LLgR9bzGFP 00:07:41.548 12:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:41.548 12:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:41.548 12:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:07:41.548 ************************************ 00:07:41.548 END TEST raid_write_error_test 00:07:41.548 ************************************ 00:07:41.548 12:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # 
has_redundancy concat 00:07:41.548 12:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:41.548 12:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:41.548 12:50:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:07:41.548 00:07:41.548 real 0m3.190s 00:07:41.548 user 0m4.021s 00:07:41.548 sys 0m0.509s 00:07:41.548 12:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.548 12:50:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.548 12:50:59 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:41.548 12:50:59 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:41.548 12:50:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:41.548 12:50:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.548 12:50:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:41.548 ************************************ 00:07:41.548 START TEST raid_state_function_test 00:07:41.548 ************************************ 00:07:41.548 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:07:41.548 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:41.548 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:41.548 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:41.548 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:41.548 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:41.548 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( 
i <= num_base_bdevs )) 00:07:41.548 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:41.548 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:41.548 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:41.548 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:41.549 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:41.549 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:41.549 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:41.549 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:41.549 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:41.549 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:41.549 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:41.549 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:41.549 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:41.549 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:41.549 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:41.549 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:41.549 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74219 00:07:41.549 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:41.549 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74219' 00:07:41.549 Process raid pid: 74219 00:07:41.549 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74219 00:07:41.549 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 74219 ']' 00:07:41.549 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.549 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.549 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.549 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.549 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.808 [2024-11-26 12:50:59.245113] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:41.808 [2024-11-26 12:50:59.245348] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.808 [2024-11-26 12:50:59.406102] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.808 [2024-11-26 12:50:59.450324] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.067 [2024-11-26 12:50:59.491996] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.067 [2024-11-26 12:50:59.492051] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.637 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.637 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:42.637 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:42.637 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.637 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.637 [2024-11-26 12:51:00.073952] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:42.637 [2024-11-26 12:51:00.074001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:42.637 [2024-11-26 12:51:00.074021] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:42.637 [2024-11-26 12:51:00.074032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:42.637 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.637 12:51:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:42.637 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.637 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:42.637 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:42.637 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:42.637 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.637 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.637 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.637 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.637 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.637 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.637 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.637 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.637 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.637 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.637 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.637 "name": "Existed_Raid", 00:07:42.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.637 "strip_size_kb": 0, 00:07:42.637 "state": "configuring", 00:07:42.637 
"raid_level": "raid1", 00:07:42.637 "superblock": false, 00:07:42.637 "num_base_bdevs": 2, 00:07:42.637 "num_base_bdevs_discovered": 0, 00:07:42.637 "num_base_bdevs_operational": 2, 00:07:42.637 "base_bdevs_list": [ 00:07:42.637 { 00:07:42.637 "name": "BaseBdev1", 00:07:42.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.637 "is_configured": false, 00:07:42.637 "data_offset": 0, 00:07:42.637 "data_size": 0 00:07:42.637 }, 00:07:42.637 { 00:07:42.637 "name": "BaseBdev2", 00:07:42.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.637 "is_configured": false, 00:07:42.637 "data_offset": 0, 00:07:42.637 "data_size": 0 00:07:42.637 } 00:07:42.637 ] 00:07:42.637 }' 00:07:42.637 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.637 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.898 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:42.898 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.898 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.898 [2024-11-26 12:51:00.509129] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:42.898 [2024-11-26 12:51:00.509258] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:42.898 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.898 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:42.898 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.898 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:42.898 [2024-11-26 12:51:00.521138] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:42.898 [2024-11-26 12:51:00.521247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:42.898 [2024-11-26 12:51:00.521274] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:42.898 [2024-11-26 12:51:00.521299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:42.898 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.898 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:42.898 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.898 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.898 [2024-11-26 12:51:00.541723] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:42.898 BaseBdev1 00:07:42.898 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.898 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:42.898 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:42.898 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:42.898 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:42.898 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:42.898 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:42.898 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:07:42.898 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.898 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.898 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.898 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:42.898 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.898 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.898 [ 00:07:42.898 { 00:07:42.898 "name": "BaseBdev1", 00:07:42.898 "aliases": [ 00:07:42.898 "3d4a77ef-5a73-4c0c-9bc0-21ee893d3c05" 00:07:42.898 ], 00:07:42.898 "product_name": "Malloc disk", 00:07:42.898 "block_size": 512, 00:07:42.898 "num_blocks": 65536, 00:07:42.898 "uuid": "3d4a77ef-5a73-4c0c-9bc0-21ee893d3c05", 00:07:42.898 "assigned_rate_limits": { 00:07:42.898 "rw_ios_per_sec": 0, 00:07:42.898 "rw_mbytes_per_sec": 0, 00:07:42.898 "r_mbytes_per_sec": 0, 00:07:42.898 "w_mbytes_per_sec": 0 00:07:42.898 }, 00:07:42.898 "claimed": true, 00:07:42.898 "claim_type": "exclusive_write", 00:07:42.898 "zoned": false, 00:07:42.898 "supported_io_types": { 00:07:42.898 "read": true, 00:07:42.898 "write": true, 00:07:42.898 "unmap": true, 00:07:42.898 "flush": true, 00:07:42.898 "reset": true, 00:07:42.898 "nvme_admin": false, 00:07:42.898 "nvme_io": false, 00:07:42.898 "nvme_io_md": false, 00:07:42.898 "write_zeroes": true, 00:07:42.898 "zcopy": true, 00:07:42.898 "get_zone_info": false, 00:07:42.898 "zone_management": false, 00:07:42.898 "zone_append": false, 00:07:42.898 "compare": false, 00:07:42.898 "compare_and_write": false, 00:07:42.898 "abort": true, 00:07:43.158 "seek_hole": false, 00:07:43.158 "seek_data": false, 00:07:43.158 "copy": true, 00:07:43.158 "nvme_iov_md": 
false 00:07:43.158 }, 00:07:43.158 "memory_domains": [ 00:07:43.158 { 00:07:43.158 "dma_device_id": "system", 00:07:43.158 "dma_device_type": 1 00:07:43.158 }, 00:07:43.158 { 00:07:43.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.158 "dma_device_type": 2 00:07:43.158 } 00:07:43.158 ], 00:07:43.158 "driver_specific": {} 00:07:43.158 } 00:07:43.158 ] 00:07:43.158 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.158 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:43.158 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:43.158 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.158 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:43.158 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:43.158 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:43.158 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.158 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.158 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.158 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.158 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.158 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.158 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.158 12:51:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.158 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.158 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.158 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.158 "name": "Existed_Raid", 00:07:43.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.158 "strip_size_kb": 0, 00:07:43.158 "state": "configuring", 00:07:43.158 "raid_level": "raid1", 00:07:43.158 "superblock": false, 00:07:43.158 "num_base_bdevs": 2, 00:07:43.158 "num_base_bdevs_discovered": 1, 00:07:43.158 "num_base_bdevs_operational": 2, 00:07:43.158 "base_bdevs_list": [ 00:07:43.158 { 00:07:43.158 "name": "BaseBdev1", 00:07:43.158 "uuid": "3d4a77ef-5a73-4c0c-9bc0-21ee893d3c05", 00:07:43.158 "is_configured": true, 00:07:43.158 "data_offset": 0, 00:07:43.158 "data_size": 65536 00:07:43.158 }, 00:07:43.158 { 00:07:43.158 "name": "BaseBdev2", 00:07:43.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.158 "is_configured": false, 00:07:43.158 "data_offset": 0, 00:07:43.158 "data_size": 0 00:07:43.158 } 00:07:43.158 ] 00:07:43.158 }' 00:07:43.158 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.158 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.418 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:43.418 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.418 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.418 [2024-11-26 12:51:01.032897] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:43.418 [2024-11-26 12:51:01.032937] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:43.418 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.418 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:43.418 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.418 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.418 [2024-11-26 12:51:01.044909] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:43.418 [2024-11-26 12:51:01.046731] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:43.418 [2024-11-26 12:51:01.046821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:43.418 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.418 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:43.418 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:43.418 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:43.418 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.418 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:43.418 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:43.418 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:43.418 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:43.418 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.418 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.418 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.418 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.418 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.418 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.418 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.418 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.418 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.678 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.678 "name": "Existed_Raid", 00:07:43.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.678 "strip_size_kb": 0, 00:07:43.678 "state": "configuring", 00:07:43.678 "raid_level": "raid1", 00:07:43.678 "superblock": false, 00:07:43.678 "num_base_bdevs": 2, 00:07:43.678 "num_base_bdevs_discovered": 1, 00:07:43.678 "num_base_bdevs_operational": 2, 00:07:43.678 "base_bdevs_list": [ 00:07:43.678 { 00:07:43.678 "name": "BaseBdev1", 00:07:43.678 "uuid": "3d4a77ef-5a73-4c0c-9bc0-21ee893d3c05", 00:07:43.678 "is_configured": true, 00:07:43.678 "data_offset": 0, 00:07:43.678 "data_size": 65536 00:07:43.678 }, 00:07:43.678 { 00:07:43.678 "name": "BaseBdev2", 00:07:43.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.678 "is_configured": false, 00:07:43.678 "data_offset": 0, 00:07:43.678 "data_size": 0 00:07:43.678 } 00:07:43.678 
] 00:07:43.678 }' 00:07:43.678 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.678 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.938 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:43.938 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.938 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.938 [2024-11-26 12:51:01.486946] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:43.938 [2024-11-26 12:51:01.486996] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:43.938 [2024-11-26 12:51:01.487007] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:43.938 [2024-11-26 12:51:01.487346] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:43.938 [2024-11-26 12:51:01.487537] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:43.938 [2024-11-26 12:51:01.487557] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:43.938 [2024-11-26 12:51:01.487801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.938 BaseBdev2 00:07:43.938 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.938 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:43.938 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:43.938 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:43.938 12:51:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:43.938 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:43.938 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:43.938 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:43.938 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.938 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.938 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.938 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:43.938 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.938 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.938 [ 00:07:43.938 { 00:07:43.938 "name": "BaseBdev2", 00:07:43.938 "aliases": [ 00:07:43.938 "ff216140-9f77-48eb-a6fb-5583b0f1e2e0" 00:07:43.938 ], 00:07:43.938 "product_name": "Malloc disk", 00:07:43.938 "block_size": 512, 00:07:43.938 "num_blocks": 65536, 00:07:43.938 "uuid": "ff216140-9f77-48eb-a6fb-5583b0f1e2e0", 00:07:43.938 "assigned_rate_limits": { 00:07:43.938 "rw_ios_per_sec": 0, 00:07:43.938 "rw_mbytes_per_sec": 0, 00:07:43.938 "r_mbytes_per_sec": 0, 00:07:43.938 "w_mbytes_per_sec": 0 00:07:43.938 }, 00:07:43.938 "claimed": true, 00:07:43.938 "claim_type": "exclusive_write", 00:07:43.938 "zoned": false, 00:07:43.938 "supported_io_types": { 00:07:43.938 "read": true, 00:07:43.938 "write": true, 00:07:43.938 "unmap": true, 00:07:43.938 "flush": true, 00:07:43.938 "reset": true, 00:07:43.938 "nvme_admin": false, 00:07:43.938 "nvme_io": false, 00:07:43.938 "nvme_io_md": 
false, 00:07:43.938 "write_zeroes": true, 00:07:43.938 "zcopy": true, 00:07:43.938 "get_zone_info": false, 00:07:43.938 "zone_management": false, 00:07:43.938 "zone_append": false, 00:07:43.938 "compare": false, 00:07:43.938 "compare_and_write": false, 00:07:43.938 "abort": true, 00:07:43.938 "seek_hole": false, 00:07:43.938 "seek_data": false, 00:07:43.938 "copy": true, 00:07:43.938 "nvme_iov_md": false 00:07:43.938 }, 00:07:43.938 "memory_domains": [ 00:07:43.938 { 00:07:43.938 "dma_device_id": "system", 00:07:43.938 "dma_device_type": 1 00:07:43.938 }, 00:07:43.938 { 00:07:43.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.938 "dma_device_type": 2 00:07:43.938 } 00:07:43.938 ], 00:07:43.938 "driver_specific": {} 00:07:43.938 } 00:07:43.938 ] 00:07:43.938 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.938 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:43.938 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:43.938 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:43.938 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:43.938 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.938 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:43.938 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:43.938 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:43.939 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.939 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:43.939 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.939 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.939 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.939 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.939 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.939 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.939 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.939 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.939 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.939 "name": "Existed_Raid", 00:07:43.939 "uuid": "07839c55-b1db-4198-b59b-8b2203c63731", 00:07:43.939 "strip_size_kb": 0, 00:07:43.939 "state": "online", 00:07:43.939 "raid_level": "raid1", 00:07:43.939 "superblock": false, 00:07:43.939 "num_base_bdevs": 2, 00:07:43.939 "num_base_bdevs_discovered": 2, 00:07:43.939 "num_base_bdevs_operational": 2, 00:07:43.939 "base_bdevs_list": [ 00:07:43.939 { 00:07:43.939 "name": "BaseBdev1", 00:07:43.939 "uuid": "3d4a77ef-5a73-4c0c-9bc0-21ee893d3c05", 00:07:43.939 "is_configured": true, 00:07:43.939 "data_offset": 0, 00:07:43.939 "data_size": 65536 00:07:43.939 }, 00:07:43.939 { 00:07:43.939 "name": "BaseBdev2", 00:07:43.939 "uuid": "ff216140-9f77-48eb-a6fb-5583b0f1e2e0", 00:07:43.939 "is_configured": true, 00:07:43.939 "data_offset": 0, 00:07:43.939 "data_size": 65536 00:07:43.939 } 00:07:43.939 ] 00:07:43.939 }' 00:07:43.939 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:43.939 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.508 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:44.508 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:44.508 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:44.508 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:44.508 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:44.508 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:44.508 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:44.508 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:44.508 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.508 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.508 [2024-11-26 12:51:01.970434] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:44.508 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.508 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:44.508 "name": "Existed_Raid", 00:07:44.508 "aliases": [ 00:07:44.508 "07839c55-b1db-4198-b59b-8b2203c63731" 00:07:44.508 ], 00:07:44.508 "product_name": "Raid Volume", 00:07:44.508 "block_size": 512, 00:07:44.508 "num_blocks": 65536, 00:07:44.508 "uuid": "07839c55-b1db-4198-b59b-8b2203c63731", 00:07:44.508 "assigned_rate_limits": { 00:07:44.508 "rw_ios_per_sec": 0, 00:07:44.508 "rw_mbytes_per_sec": 0, 00:07:44.508 "r_mbytes_per_sec": 
0, 00:07:44.508 "w_mbytes_per_sec": 0 00:07:44.508 }, 00:07:44.508 "claimed": false, 00:07:44.508 "zoned": false, 00:07:44.508 "supported_io_types": { 00:07:44.508 "read": true, 00:07:44.508 "write": true, 00:07:44.508 "unmap": false, 00:07:44.508 "flush": false, 00:07:44.508 "reset": true, 00:07:44.508 "nvme_admin": false, 00:07:44.508 "nvme_io": false, 00:07:44.508 "nvme_io_md": false, 00:07:44.508 "write_zeroes": true, 00:07:44.508 "zcopy": false, 00:07:44.508 "get_zone_info": false, 00:07:44.508 "zone_management": false, 00:07:44.508 "zone_append": false, 00:07:44.508 "compare": false, 00:07:44.508 "compare_and_write": false, 00:07:44.508 "abort": false, 00:07:44.508 "seek_hole": false, 00:07:44.508 "seek_data": false, 00:07:44.508 "copy": false, 00:07:44.508 "nvme_iov_md": false 00:07:44.508 }, 00:07:44.508 "memory_domains": [ 00:07:44.508 { 00:07:44.508 "dma_device_id": "system", 00:07:44.508 "dma_device_type": 1 00:07:44.508 }, 00:07:44.508 { 00:07:44.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.508 "dma_device_type": 2 00:07:44.508 }, 00:07:44.508 { 00:07:44.508 "dma_device_id": "system", 00:07:44.508 "dma_device_type": 1 00:07:44.508 }, 00:07:44.508 { 00:07:44.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.508 "dma_device_type": 2 00:07:44.508 } 00:07:44.508 ], 00:07:44.508 "driver_specific": { 00:07:44.508 "raid": { 00:07:44.508 "uuid": "07839c55-b1db-4198-b59b-8b2203c63731", 00:07:44.508 "strip_size_kb": 0, 00:07:44.508 "state": "online", 00:07:44.508 "raid_level": "raid1", 00:07:44.508 "superblock": false, 00:07:44.508 "num_base_bdevs": 2, 00:07:44.508 "num_base_bdevs_discovered": 2, 00:07:44.508 "num_base_bdevs_operational": 2, 00:07:44.508 "base_bdevs_list": [ 00:07:44.508 { 00:07:44.508 "name": "BaseBdev1", 00:07:44.508 "uuid": "3d4a77ef-5a73-4c0c-9bc0-21ee893d3c05", 00:07:44.508 "is_configured": true, 00:07:44.508 "data_offset": 0, 00:07:44.508 "data_size": 65536 00:07:44.508 }, 00:07:44.508 { 00:07:44.508 "name": "BaseBdev2", 
00:07:44.508 "uuid": "ff216140-9f77-48eb-a6fb-5583b0f1e2e0", 00:07:44.508 "is_configured": true, 00:07:44.508 "data_offset": 0, 00:07:44.508 "data_size": 65536 00:07:44.508 } 00:07:44.508 ] 00:07:44.508 } 00:07:44.508 } 00:07:44.508 }' 00:07:44.508 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:44.508 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:44.508 BaseBdev2' 00:07:44.508 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.508 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:44.508 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.508 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:44.508 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.508 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.508 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.508 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.508 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.508 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.508 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.508 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:44.508 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.508 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.508 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.508 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.508 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.508 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.509 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:44.768 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.768 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.768 [2024-11-26 12:51:02.189809] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:44.768 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.768 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:44.768 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:44.768 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:44.768 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:44.768 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:44.768 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:44.768 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:07:44.768 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:44.768 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:44.768 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:44.768 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:44.768 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.768 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.768 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.768 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.768 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.768 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.768 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.768 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.768 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.768 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.768 "name": "Existed_Raid", 00:07:44.768 "uuid": "07839c55-b1db-4198-b59b-8b2203c63731", 00:07:44.768 "strip_size_kb": 0, 00:07:44.768 "state": "online", 00:07:44.768 "raid_level": "raid1", 00:07:44.768 "superblock": false, 00:07:44.768 "num_base_bdevs": 2, 00:07:44.768 "num_base_bdevs_discovered": 1, 00:07:44.768 "num_base_bdevs_operational": 1, 00:07:44.768 "base_bdevs_list": [ 00:07:44.768 
{ 00:07:44.768 "name": null, 00:07:44.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.768 "is_configured": false, 00:07:44.768 "data_offset": 0, 00:07:44.768 "data_size": 65536 00:07:44.768 }, 00:07:44.768 { 00:07:44.768 "name": "BaseBdev2", 00:07:44.768 "uuid": "ff216140-9f77-48eb-a6fb-5583b0f1e2e0", 00:07:44.768 "is_configured": true, 00:07:44.768 "data_offset": 0, 00:07:44.768 "data_size": 65536 00:07:44.768 } 00:07:44.768 ] 00:07:44.768 }' 00:07:44.769 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.769 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.029 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:45.029 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:45.029 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:45.029 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.029 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.029 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.029 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.029 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:45.029 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:45.029 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:45.029 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.029 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:45.029 [2024-11-26 12:51:02.660412] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:45.029 [2024-11-26 12:51:02.660544] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:45.029 [2024-11-26 12:51:02.671944] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:45.029 [2024-11-26 12:51:02.671996] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:45.029 [2024-11-26 12:51:02.672007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:45.029 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.029 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:45.029 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:45.029 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:45.029 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.029 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.029 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.029 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.289 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:45.289 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:45.289 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:45.289 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74219 00:07:45.289 12:51:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 74219 ']' 00:07:45.289 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 74219 00:07:45.289 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:45.289 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:45.289 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74219 00:07:45.289 killing process with pid 74219 00:07:45.289 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:45.289 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:45.289 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74219' 00:07:45.289 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 74219 00:07:45.289 [2024-11-26 12:51:02.744473] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:45.289 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 74219 00:07:45.289 [2024-11-26 12:51:02.745454] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:45.550 12:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:45.550 00:07:45.550 real 0m3.840s 00:07:45.550 user 0m6.040s 00:07:45.550 sys 0m0.752s 00:07:45.550 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.550 12:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.550 ************************************ 00:07:45.550 END TEST raid_state_function_test 00:07:45.550 ************************************ 00:07:45.550 12:51:03 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:45.550 12:51:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:45.550 12:51:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.550 12:51:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:45.550 ************************************ 00:07:45.550 START TEST raid_state_function_test_sb 00:07:45.550 ************************************ 00:07:45.550 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:07:45.550 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:45.550 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:45.550 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:45.550 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:45.550 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:45.550 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:45.550 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:45.550 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:45.550 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:45.550 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:45.550 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:45.550 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:45.550 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:45.550 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:45.550 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:45.550 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:45.550 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:45.550 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:45.550 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:45.550 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:45.550 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:45.550 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:45.550 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74450 00:07:45.550 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:45.550 Process raid pid: 74450 00:07:45.550 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74450' 00:07:45.550 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74450 00:07:45.550 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 74450 ']' 00:07:45.550 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.550 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:45.550 12:51:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.551 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.551 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.551 [2024-11-26 12:51:03.165332] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:45.551 [2024-11-26 12:51:03.165540] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.811 [2024-11-26 12:51:03.324921] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.811 [2024-11-26 12:51:03.369593] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.811 [2024-11-26 12:51:03.410805] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.811 [2024-11-26 12:51:03.410841] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.381 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.381 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:46.381 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:46.381 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.381 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.381 [2024-11-26 12:51:03.987559] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:46.381 [2024-11-26 12:51:03.987681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:46.381 [2024-11-26 12:51:03.987712] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:46.381 [2024-11-26 12:51:03.987738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:46.381 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.381 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:46.381 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.381 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.381 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:46.381 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:46.381 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.381 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.381 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.381 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.381 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.381 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.381 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:07:46.381 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.381 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.381 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.381 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.381 "name": "Existed_Raid", 00:07:46.381 "uuid": "9b1e1d71-a7ec-4c98-b52f-26ead92161b7", 00:07:46.381 "strip_size_kb": 0, 00:07:46.381 "state": "configuring", 00:07:46.381 "raid_level": "raid1", 00:07:46.381 "superblock": true, 00:07:46.381 "num_base_bdevs": 2, 00:07:46.381 "num_base_bdevs_discovered": 0, 00:07:46.381 "num_base_bdevs_operational": 2, 00:07:46.381 "base_bdevs_list": [ 00:07:46.381 { 00:07:46.381 "name": "BaseBdev1", 00:07:46.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.381 "is_configured": false, 00:07:46.381 "data_offset": 0, 00:07:46.381 "data_size": 0 00:07:46.381 }, 00:07:46.381 { 00:07:46.381 "name": "BaseBdev2", 00:07:46.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.381 "is_configured": false, 00:07:46.381 "data_offset": 0, 00:07:46.381 "data_size": 0 00:07:46.381 } 00:07:46.381 ] 00:07:46.381 }' 00:07:46.381 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.381 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.951 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.952 [2024-11-26 12:51:04.414784] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:07:46.952 [2024-11-26 12:51:04.414849] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.952 [2024-11-26 12:51:04.426795] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:46.952 [2024-11-26 12:51:04.426893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:46.952 [2024-11-26 12:51:04.426920] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:46.952 [2024-11-26 12:51:04.426943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.952 [2024-11-26 12:51:04.447391] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:46.952 BaseBdev1 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.952 [ 00:07:46.952 { 00:07:46.952 "name": "BaseBdev1", 00:07:46.952 "aliases": [ 00:07:46.952 "c58a8fa5-0ee5-42f4-8274-779f1f65ce06" 00:07:46.952 ], 00:07:46.952 "product_name": "Malloc disk", 00:07:46.952 "block_size": 512, 00:07:46.952 "num_blocks": 65536, 00:07:46.952 "uuid": "c58a8fa5-0ee5-42f4-8274-779f1f65ce06", 00:07:46.952 "assigned_rate_limits": { 00:07:46.952 "rw_ios_per_sec": 0, 00:07:46.952 "rw_mbytes_per_sec": 0, 00:07:46.952 "r_mbytes_per_sec": 0, 00:07:46.952 "w_mbytes_per_sec": 0 00:07:46.952 }, 00:07:46.952 "claimed": true, 
00:07:46.952 "claim_type": "exclusive_write", 00:07:46.952 "zoned": false, 00:07:46.952 "supported_io_types": { 00:07:46.952 "read": true, 00:07:46.952 "write": true, 00:07:46.952 "unmap": true, 00:07:46.952 "flush": true, 00:07:46.952 "reset": true, 00:07:46.952 "nvme_admin": false, 00:07:46.952 "nvme_io": false, 00:07:46.952 "nvme_io_md": false, 00:07:46.952 "write_zeroes": true, 00:07:46.952 "zcopy": true, 00:07:46.952 "get_zone_info": false, 00:07:46.952 "zone_management": false, 00:07:46.952 "zone_append": false, 00:07:46.952 "compare": false, 00:07:46.952 "compare_and_write": false, 00:07:46.952 "abort": true, 00:07:46.952 "seek_hole": false, 00:07:46.952 "seek_data": false, 00:07:46.952 "copy": true, 00:07:46.952 "nvme_iov_md": false 00:07:46.952 }, 00:07:46.952 "memory_domains": [ 00:07:46.952 { 00:07:46.952 "dma_device_id": "system", 00:07:46.952 "dma_device_type": 1 00:07:46.952 }, 00:07:46.952 { 00:07:46.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.952 "dma_device_type": 2 00:07:46.952 } 00:07:46.952 ], 00:07:46.952 "driver_specific": {} 00:07:46.952 } 00:07:46.952 ] 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.952 "name": "Existed_Raid", 00:07:46.952 "uuid": "449ec034-75af-4856-81ce-92b572ad4d02", 00:07:46.952 "strip_size_kb": 0, 00:07:46.952 "state": "configuring", 00:07:46.952 "raid_level": "raid1", 00:07:46.952 "superblock": true, 00:07:46.952 "num_base_bdevs": 2, 00:07:46.952 "num_base_bdevs_discovered": 1, 00:07:46.952 "num_base_bdevs_operational": 2, 00:07:46.952 "base_bdevs_list": [ 00:07:46.952 { 00:07:46.952 "name": "BaseBdev1", 00:07:46.952 "uuid": "c58a8fa5-0ee5-42f4-8274-779f1f65ce06", 00:07:46.952 "is_configured": true, 00:07:46.952 "data_offset": 2048, 00:07:46.952 "data_size": 63488 00:07:46.952 }, 00:07:46.952 { 00:07:46.952 "name": "BaseBdev2", 00:07:46.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.952 "is_configured": false, 00:07:46.952 
"data_offset": 0, 00:07:46.952 "data_size": 0 00:07:46.952 } 00:07:46.952 ] 00:07:46.952 }' 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.952 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.212 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:47.212 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.212 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.212 [2024-11-26 12:51:04.870727] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:47.212 [2024-11-26 12:51:04.870814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:47.212 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.212 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:47.212 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.212 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.212 [2024-11-26 12:51:04.882744] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:47.212 [2024-11-26 12:51:04.884574] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:47.212 [2024-11-26 12:51:04.884655] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:47.212 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.212 12:51:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:47.212 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:47.212 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:47.212 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.472 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:47.472 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:47.472 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:47.472 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.472 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.472 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.472 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.472 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.472 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.472 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.472 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.472 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.472 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.472 12:51:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.472 "name": "Existed_Raid", 00:07:47.472 "uuid": "4ddcb559-8b66-44b5-888c-a9eefabc43ed", 00:07:47.472 "strip_size_kb": 0, 00:07:47.472 "state": "configuring", 00:07:47.472 "raid_level": "raid1", 00:07:47.472 "superblock": true, 00:07:47.472 "num_base_bdevs": 2, 00:07:47.472 "num_base_bdevs_discovered": 1, 00:07:47.472 "num_base_bdevs_operational": 2, 00:07:47.472 "base_bdevs_list": [ 00:07:47.472 { 00:07:47.472 "name": "BaseBdev1", 00:07:47.472 "uuid": "c58a8fa5-0ee5-42f4-8274-779f1f65ce06", 00:07:47.472 "is_configured": true, 00:07:47.472 "data_offset": 2048, 00:07:47.472 "data_size": 63488 00:07:47.472 }, 00:07:47.472 { 00:07:47.472 "name": "BaseBdev2", 00:07:47.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.472 "is_configured": false, 00:07:47.472 "data_offset": 0, 00:07:47.472 "data_size": 0 00:07:47.472 } 00:07:47.472 ] 00:07:47.472 }' 00:07:47.472 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.472 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.731 [2024-11-26 12:51:05.299255] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:47.731 [2024-11-26 12:51:05.299916] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:47.731 [2024-11-26 12:51:05.300096] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:47.731 BaseBdev2 00:07:47.731 [2024-11-26 12:51:05.301069] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ba0 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:47.731 [2024-11-26 12:51:05.301488] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:47.731 [2024-11-26 12:51:05.301551] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:47.731 [2024-11-26 12:51:05.301936] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:47.731 [ 00:07:47.731 { 00:07:47.731 "name": "BaseBdev2", 00:07:47.731 "aliases": [ 00:07:47.731 "5efbef60-a4cf-482b-b65a-99035ef913e8" 00:07:47.731 ], 00:07:47.731 "product_name": "Malloc disk", 00:07:47.731 "block_size": 512, 00:07:47.731 "num_blocks": 65536, 00:07:47.731 "uuid": "5efbef60-a4cf-482b-b65a-99035ef913e8", 00:07:47.731 "assigned_rate_limits": { 00:07:47.731 "rw_ios_per_sec": 0, 00:07:47.731 "rw_mbytes_per_sec": 0, 00:07:47.731 "r_mbytes_per_sec": 0, 00:07:47.731 "w_mbytes_per_sec": 0 00:07:47.731 }, 00:07:47.731 "claimed": true, 00:07:47.731 "claim_type": "exclusive_write", 00:07:47.731 "zoned": false, 00:07:47.731 "supported_io_types": { 00:07:47.731 "read": true, 00:07:47.731 "write": true, 00:07:47.731 "unmap": true, 00:07:47.731 "flush": true, 00:07:47.731 "reset": true, 00:07:47.731 "nvme_admin": false, 00:07:47.731 "nvme_io": false, 00:07:47.731 "nvme_io_md": false, 00:07:47.731 "write_zeroes": true, 00:07:47.731 "zcopy": true, 00:07:47.731 "get_zone_info": false, 00:07:47.731 "zone_management": false, 00:07:47.731 "zone_append": false, 00:07:47.731 "compare": false, 00:07:47.731 "compare_and_write": false, 00:07:47.731 "abort": true, 00:07:47.731 "seek_hole": false, 00:07:47.731 "seek_data": false, 00:07:47.731 "copy": true, 00:07:47.731 "nvme_iov_md": false 00:07:47.731 }, 00:07:47.731 "memory_domains": [ 00:07:47.731 { 00:07:47.731 "dma_device_id": "system", 00:07:47.731 "dma_device_type": 1 00:07:47.731 }, 00:07:47.731 { 00:07:47.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.731 "dma_device_type": 2 00:07:47.731 } 00:07:47.731 ], 00:07:47.731 "driver_specific": {} 00:07:47.731 } 00:07:47.731 ] 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:07:47.731 "name": "Existed_Raid", 00:07:47.731 "uuid": "4ddcb559-8b66-44b5-888c-a9eefabc43ed", 00:07:47.731 "strip_size_kb": 0, 00:07:47.731 "state": "online", 00:07:47.731 "raid_level": "raid1", 00:07:47.731 "superblock": true, 00:07:47.731 "num_base_bdevs": 2, 00:07:47.731 "num_base_bdevs_discovered": 2, 00:07:47.731 "num_base_bdevs_operational": 2, 00:07:47.731 "base_bdevs_list": [ 00:07:47.731 { 00:07:47.731 "name": "BaseBdev1", 00:07:47.731 "uuid": "c58a8fa5-0ee5-42f4-8274-779f1f65ce06", 00:07:47.731 "is_configured": true, 00:07:47.731 "data_offset": 2048, 00:07:47.731 "data_size": 63488 00:07:47.731 }, 00:07:47.731 { 00:07:47.731 "name": "BaseBdev2", 00:07:47.731 "uuid": "5efbef60-a4cf-482b-b65a-99035ef913e8", 00:07:47.731 "is_configured": true, 00:07:47.731 "data_offset": 2048, 00:07:47.731 "data_size": 63488 00:07:47.731 } 00:07:47.731 ] 00:07:47.731 }' 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.731 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.299 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:48.299 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:48.299 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:48.299 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:48.299 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:48.299 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:48.299 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:48.299 12:51:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:48.299 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.299 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.299 [2024-11-26 12:51:05.766587] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:48.299 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.299 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:48.299 "name": "Existed_Raid", 00:07:48.299 "aliases": [ 00:07:48.299 "4ddcb559-8b66-44b5-888c-a9eefabc43ed" 00:07:48.299 ], 00:07:48.299 "product_name": "Raid Volume", 00:07:48.299 "block_size": 512, 00:07:48.299 "num_blocks": 63488, 00:07:48.299 "uuid": "4ddcb559-8b66-44b5-888c-a9eefabc43ed", 00:07:48.299 "assigned_rate_limits": { 00:07:48.299 "rw_ios_per_sec": 0, 00:07:48.299 "rw_mbytes_per_sec": 0, 00:07:48.299 "r_mbytes_per_sec": 0, 00:07:48.299 "w_mbytes_per_sec": 0 00:07:48.299 }, 00:07:48.299 "claimed": false, 00:07:48.299 "zoned": false, 00:07:48.299 "supported_io_types": { 00:07:48.299 "read": true, 00:07:48.299 "write": true, 00:07:48.299 "unmap": false, 00:07:48.299 "flush": false, 00:07:48.299 "reset": true, 00:07:48.299 "nvme_admin": false, 00:07:48.299 "nvme_io": false, 00:07:48.299 "nvme_io_md": false, 00:07:48.299 "write_zeroes": true, 00:07:48.299 "zcopy": false, 00:07:48.299 "get_zone_info": false, 00:07:48.299 "zone_management": false, 00:07:48.299 "zone_append": false, 00:07:48.299 "compare": false, 00:07:48.299 "compare_and_write": false, 00:07:48.299 "abort": false, 00:07:48.299 "seek_hole": false, 00:07:48.299 "seek_data": false, 00:07:48.299 "copy": false, 00:07:48.299 "nvme_iov_md": false 00:07:48.299 }, 00:07:48.299 "memory_domains": [ 00:07:48.299 { 00:07:48.299 "dma_device_id": "system", 00:07:48.299 
"dma_device_type": 1 00:07:48.299 }, 00:07:48.299 { 00:07:48.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.299 "dma_device_type": 2 00:07:48.299 }, 00:07:48.299 { 00:07:48.299 "dma_device_id": "system", 00:07:48.299 "dma_device_type": 1 00:07:48.299 }, 00:07:48.299 { 00:07:48.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.299 "dma_device_type": 2 00:07:48.299 } 00:07:48.299 ], 00:07:48.299 "driver_specific": { 00:07:48.299 "raid": { 00:07:48.299 "uuid": "4ddcb559-8b66-44b5-888c-a9eefabc43ed", 00:07:48.299 "strip_size_kb": 0, 00:07:48.299 "state": "online", 00:07:48.299 "raid_level": "raid1", 00:07:48.299 "superblock": true, 00:07:48.299 "num_base_bdevs": 2, 00:07:48.299 "num_base_bdevs_discovered": 2, 00:07:48.299 "num_base_bdevs_operational": 2, 00:07:48.299 "base_bdevs_list": [ 00:07:48.299 { 00:07:48.299 "name": "BaseBdev1", 00:07:48.299 "uuid": "c58a8fa5-0ee5-42f4-8274-779f1f65ce06", 00:07:48.299 "is_configured": true, 00:07:48.299 "data_offset": 2048, 00:07:48.299 "data_size": 63488 00:07:48.299 }, 00:07:48.299 { 00:07:48.299 "name": "BaseBdev2", 00:07:48.299 "uuid": "5efbef60-a4cf-482b-b65a-99035ef913e8", 00:07:48.299 "is_configured": true, 00:07:48.299 "data_offset": 2048, 00:07:48.299 "data_size": 63488 00:07:48.299 } 00:07:48.299 ] 00:07:48.299 } 00:07:48.299 } 00:07:48.299 }' 00:07:48.299 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:48.299 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:48.299 BaseBdev2' 00:07:48.299 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.300 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:48.300 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:07:48.300 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:48.300 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.300 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.300 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.300 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.300 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.300 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.300 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.300 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:48.300 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.300 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.300 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.300 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.300 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.300 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.300 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:48.300 12:51:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.300 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.300 [2024-11-26 12:51:05.970083] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:48.608 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.608 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:48.608 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:48.608 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:48.608 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:48.608 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:48.608 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:48.608 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:48.608 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:48.608 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:48.608 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:48.608 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:48.608 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.608 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.608 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:48.608 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.608 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.608 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.608 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.609 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.609 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.609 12:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.609 "name": "Existed_Raid", 00:07:48.609 "uuid": "4ddcb559-8b66-44b5-888c-a9eefabc43ed", 00:07:48.609 "strip_size_kb": 0, 00:07:48.609 "state": "online", 00:07:48.609 "raid_level": "raid1", 00:07:48.609 "superblock": true, 00:07:48.609 "num_base_bdevs": 2, 00:07:48.609 "num_base_bdevs_discovered": 1, 00:07:48.609 "num_base_bdevs_operational": 1, 00:07:48.609 "base_bdevs_list": [ 00:07:48.609 { 00:07:48.609 "name": null, 00:07:48.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.609 "is_configured": false, 00:07:48.609 "data_offset": 0, 00:07:48.609 "data_size": 63488 00:07:48.609 }, 00:07:48.609 { 00:07:48.609 "name": "BaseBdev2", 00:07:48.609 "uuid": "5efbef60-a4cf-482b-b65a-99035ef913e8", 00:07:48.609 "is_configured": true, 00:07:48.609 "data_offset": 2048, 00:07:48.609 "data_size": 63488 00:07:48.609 } 00:07:48.609 ] 00:07:48.609 }' 00:07:48.609 12:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.609 12:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.869 12:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:07:48.869 12:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:48.869 12:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:48.869 12:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.869 12:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.869 12:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.869 12:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.869 12:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:48.869 12:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:48.869 12:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:48.869 12:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.869 12:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.869 [2024-11-26 12:51:06.480572] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:48.869 [2024-11-26 12:51:06.480684] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:48.869 [2024-11-26 12:51:06.492470] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:48.869 [2024-11-26 12:51:06.492523] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:48.869 [2024-11-26 12:51:06.492544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:48.869 12:51:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.869 12:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:48.869 12:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:48.869 12:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.870 12:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:48.870 12:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.870 12:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.870 12:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.870 12:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:48.870 12:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:48.870 12:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:48.870 12:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74450 00:07:48.870 12:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 74450 ']' 00:07:48.870 12:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 74450 00:07:49.130 12:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:49.130 12:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:49.130 12:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74450 00:07:49.130 12:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:49.130 12:51:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:49.130 killing process with pid 74450 00:07:49.130 12:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74450' 00:07:49.130 12:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 74450 00:07:49.130 [2024-11-26 12:51:06.590109] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:49.130 12:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 74450 00:07:49.130 [2024-11-26 12:51:06.591074] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:49.391 12:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:49.391 00:07:49.391 real 0m3.762s 00:07:49.391 user 0m5.852s 00:07:49.391 sys 0m0.784s 00:07:49.391 12:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.391 12:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.391 ************************************ 00:07:49.391 END TEST raid_state_function_test_sb 00:07:49.391 ************************************ 00:07:49.391 12:51:06 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:49.391 12:51:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:49.391 12:51:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.391 12:51:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:49.391 ************************************ 00:07:49.391 START TEST raid_superblock_test 00:07:49.391 ************************************ 00:07:49.391 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:07:49.391 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local 
raid_level=raid1 00:07:49.391 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:49.391 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:49.391 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:49.391 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:49.391 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:49.391 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:49.391 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:49.391 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:49.391 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:49.391 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:49.391 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:49.391 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:49.391 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:49.391 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:49.391 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74691 00:07:49.391 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:49.391 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74691 00:07:49.391 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74691 ']' 00:07:49.391 12:51:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.391 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:49.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.391 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.391 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:49.391 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.391 [2024-11-26 12:51:06.989956] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:49.391 [2024-11-26 12:51:06.990090] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74691 ] 00:07:49.652 [2024-11-26 12:51:07.148422] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.652 [2024-11-26 12:51:07.192391] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.652 [2024-11-26 12:51:07.234509] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.652 [2024-11-26 12:51:07.234560] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.222 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:50.222 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:50.223 12:51:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.223 malloc1 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.223 [2024-11-26 12:51:07.824708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:50.223 [2024-11-26 12:51:07.824792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.223 [2024-11-26 12:51:07.824824] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:50.223 [2024-11-26 12:51:07.824839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.223 
[2024-11-26 12:51:07.826873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.223 [2024-11-26 12:51:07.826912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:50.223 pt1 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.223 malloc2 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.223 12:51:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.223 [2024-11-26 12:51:07.870143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:50.223 [2024-11-26 12:51:07.870274] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.223 [2024-11-26 12:51:07.870311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:50.223 [2024-11-26 12:51:07.870338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.223 [2024-11-26 12:51:07.875103] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.223 [2024-11-26 12:51:07.875218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:50.223 pt2 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.223 [2024-11-26 12:51:07.883483] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:50.223 [2024-11-26 12:51:07.886352] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:50.223 [2024-11-26 12:51:07.886562] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:50.223 [2024-11-26 12:51:07.886586] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:50.223 [2024-11-26 
12:51:07.886976] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:50.223 [2024-11-26 12:51:07.887214] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:50.223 [2024-11-26 12:51:07.887241] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:07:50.223 [2024-11-26 12:51:07.887491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:50.223 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:50.483 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.483 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.483 "name": "raid_bdev1", 00:07:50.483 "uuid": "7cdc0f9e-c7a5-4cf0-9712-8b3da7621e1f", 00:07:50.483 "strip_size_kb": 0, 00:07:50.483 "state": "online", 00:07:50.483 "raid_level": "raid1", 00:07:50.483 "superblock": true, 00:07:50.483 "num_base_bdevs": 2, 00:07:50.483 "num_base_bdevs_discovered": 2, 00:07:50.483 "num_base_bdevs_operational": 2, 00:07:50.483 "base_bdevs_list": [ 00:07:50.483 { 00:07:50.483 "name": "pt1", 00:07:50.483 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:50.483 "is_configured": true, 00:07:50.483 "data_offset": 2048, 00:07:50.483 "data_size": 63488 00:07:50.483 }, 00:07:50.483 { 00:07:50.483 "name": "pt2", 00:07:50.483 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:50.483 "is_configured": true, 00:07:50.483 "data_offset": 2048, 00:07:50.483 "data_size": 63488 00:07:50.483 } 00:07:50.483 ] 00:07:50.483 }' 00:07:50.483 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.483 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.743 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:50.743 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:50.743 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:50.743 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:50.743 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:50.743 12:51:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:50.743 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:50.743 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:50.743 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.743 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.743 [2024-11-26 12:51:08.302964] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:50.743 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.743 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:50.743 "name": "raid_bdev1", 00:07:50.743 "aliases": [ 00:07:50.743 "7cdc0f9e-c7a5-4cf0-9712-8b3da7621e1f" 00:07:50.743 ], 00:07:50.743 "product_name": "Raid Volume", 00:07:50.743 "block_size": 512, 00:07:50.743 "num_blocks": 63488, 00:07:50.743 "uuid": "7cdc0f9e-c7a5-4cf0-9712-8b3da7621e1f", 00:07:50.743 "assigned_rate_limits": { 00:07:50.743 "rw_ios_per_sec": 0, 00:07:50.743 "rw_mbytes_per_sec": 0, 00:07:50.743 "r_mbytes_per_sec": 0, 00:07:50.743 "w_mbytes_per_sec": 0 00:07:50.743 }, 00:07:50.743 "claimed": false, 00:07:50.743 "zoned": false, 00:07:50.743 "supported_io_types": { 00:07:50.743 "read": true, 00:07:50.743 "write": true, 00:07:50.743 "unmap": false, 00:07:50.743 "flush": false, 00:07:50.743 "reset": true, 00:07:50.743 "nvme_admin": false, 00:07:50.743 "nvme_io": false, 00:07:50.743 "nvme_io_md": false, 00:07:50.743 "write_zeroes": true, 00:07:50.743 "zcopy": false, 00:07:50.743 "get_zone_info": false, 00:07:50.743 "zone_management": false, 00:07:50.743 "zone_append": false, 00:07:50.743 "compare": false, 00:07:50.743 "compare_and_write": false, 00:07:50.743 "abort": false, 00:07:50.743 "seek_hole": false, 00:07:50.743 
"seek_data": false, 00:07:50.743 "copy": false, 00:07:50.743 "nvme_iov_md": false 00:07:50.743 }, 00:07:50.743 "memory_domains": [ 00:07:50.743 { 00:07:50.743 "dma_device_id": "system", 00:07:50.743 "dma_device_type": 1 00:07:50.743 }, 00:07:50.743 { 00:07:50.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.743 "dma_device_type": 2 00:07:50.743 }, 00:07:50.743 { 00:07:50.743 "dma_device_id": "system", 00:07:50.743 "dma_device_type": 1 00:07:50.743 }, 00:07:50.743 { 00:07:50.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.743 "dma_device_type": 2 00:07:50.743 } 00:07:50.743 ], 00:07:50.743 "driver_specific": { 00:07:50.743 "raid": { 00:07:50.743 "uuid": "7cdc0f9e-c7a5-4cf0-9712-8b3da7621e1f", 00:07:50.743 "strip_size_kb": 0, 00:07:50.743 "state": "online", 00:07:50.743 "raid_level": "raid1", 00:07:50.743 "superblock": true, 00:07:50.743 "num_base_bdevs": 2, 00:07:50.743 "num_base_bdevs_discovered": 2, 00:07:50.743 "num_base_bdevs_operational": 2, 00:07:50.743 "base_bdevs_list": [ 00:07:50.743 { 00:07:50.743 "name": "pt1", 00:07:50.743 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:50.743 "is_configured": true, 00:07:50.743 "data_offset": 2048, 00:07:50.743 "data_size": 63488 00:07:50.743 }, 00:07:50.743 { 00:07:50.743 "name": "pt2", 00:07:50.743 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:50.743 "is_configured": true, 00:07:50.743 "data_offset": 2048, 00:07:50.743 "data_size": 63488 00:07:50.743 } 00:07:50.743 ] 00:07:50.743 } 00:07:50.743 } 00:07:50.743 }' 00:07:50.743 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:50.743 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:50.743 pt2' 00:07:50.743 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.004 12:51:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.004 [2024-11-26 12:51:08.530504] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7cdc0f9e-c7a5-4cf0-9712-8b3da7621e1f 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7cdc0f9e-c7a5-4cf0-9712-8b3da7621e1f ']' 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.004 [2024-11-26 12:51:08.574263] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:51.004 [2024-11-26 12:51:08.574289] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:51.004 [2024-11-26 12:51:08.574358] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:51.004 [2024-11-26 12:51:08.574423] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:51.004 [2024-11-26 12:51:08.574432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:51.004 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:51.005 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:51.005 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:51.005 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.005 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.005 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.005 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:51.005 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:51.005 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.005 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.005 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.005 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:51.005 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.005 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:07:51.005 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.005 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.265 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:51.265 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:51.265 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:51.265 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:51.265 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.266 [2024-11-26 12:51:08.706056] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:51.266 [2024-11-26 12:51:08.707931] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:51.266 [2024-11-26 12:51:08.708002] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a 
different raid bdev found on bdev malloc1 00:07:51.266 [2024-11-26 12:51:08.708049] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:51.266 [2024-11-26 12:51:08.708065] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:51.266 [2024-11-26 12:51:08.708073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:07:51.266 request: 00:07:51.266 { 00:07:51.266 "name": "raid_bdev1", 00:07:51.266 "raid_level": "raid1", 00:07:51.266 "base_bdevs": [ 00:07:51.266 "malloc1", 00:07:51.266 "malloc2" 00:07:51.266 ], 00:07:51.266 "superblock": false, 00:07:51.266 "method": "bdev_raid_create", 00:07:51.266 "req_id": 1 00:07:51.266 } 00:07:51.266 Got JSON-RPC error response 00:07:51.266 response: 00:07:51.266 { 00:07:51.266 "code": -17, 00:07:51.266 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:51.266 } 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.266 [2024-11-26 12:51:08.769916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:51.266 [2024-11-26 12:51:08.769976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:51.266 [2024-11-26 12:51:08.769992] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:51.266 [2024-11-26 12:51:08.770000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:51.266 [2024-11-26 12:51:08.772007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:51.266 [2024-11-26 12:51:08.772043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:51.266 [2024-11-26 12:51:08.772117] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:51.266 [2024-11-26 12:51:08.772164] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:51.266 pt1 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:51.266 12:51:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.266 "name": "raid_bdev1", 00:07:51.266 "uuid": "7cdc0f9e-c7a5-4cf0-9712-8b3da7621e1f", 00:07:51.266 "strip_size_kb": 0, 00:07:51.266 "state": "configuring", 00:07:51.266 "raid_level": "raid1", 00:07:51.266 "superblock": true, 00:07:51.266 "num_base_bdevs": 2, 00:07:51.266 "num_base_bdevs_discovered": 1, 00:07:51.266 "num_base_bdevs_operational": 2, 00:07:51.266 "base_bdevs_list": [ 00:07:51.266 { 00:07:51.266 "name": "pt1", 00:07:51.266 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:51.266 
"is_configured": true, 00:07:51.266 "data_offset": 2048, 00:07:51.266 "data_size": 63488 00:07:51.266 }, 00:07:51.266 { 00:07:51.266 "name": null, 00:07:51.266 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:51.266 "is_configured": false, 00:07:51.266 "data_offset": 2048, 00:07:51.266 "data_size": 63488 00:07:51.266 } 00:07:51.266 ] 00:07:51.266 }' 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.266 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.527 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:51.527 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:51.527 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:51.527 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:51.527 12:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.527 12:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.527 [2024-11-26 12:51:09.201199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:51.527 [2024-11-26 12:51:09.201268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:51.527 [2024-11-26 12:51:09.201289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:51.527 [2024-11-26 12:51:09.201298] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:51.527 [2024-11-26 12:51:09.201666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:51.527 [2024-11-26 12:51:09.201691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:51.527 [2024-11-26 12:51:09.201753] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:51.527 [2024-11-26 12:51:09.201772] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:51.527 [2024-11-26 12:51:09.201855] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:51.527 [2024-11-26 12:51:09.201872] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:51.527 [2024-11-26 12:51:09.202095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:51.527 [2024-11-26 12:51:09.202233] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:51.527 [2024-11-26 12:51:09.202252] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:51.527 [2024-11-26 12:51:09.202351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.787 pt2 00:07:51.787 12:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.787 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:51.787 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:51.787 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:51.787 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:51.787 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:51.787 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:51.787 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:51.787 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.787 
12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.787 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.787 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.787 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.787 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:51.787 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.787 12:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.787 12:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.787 12:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.787 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.787 "name": "raid_bdev1", 00:07:51.787 "uuid": "7cdc0f9e-c7a5-4cf0-9712-8b3da7621e1f", 00:07:51.787 "strip_size_kb": 0, 00:07:51.787 "state": "online", 00:07:51.787 "raid_level": "raid1", 00:07:51.787 "superblock": true, 00:07:51.787 "num_base_bdevs": 2, 00:07:51.787 "num_base_bdevs_discovered": 2, 00:07:51.787 "num_base_bdevs_operational": 2, 00:07:51.787 "base_bdevs_list": [ 00:07:51.787 { 00:07:51.787 "name": "pt1", 00:07:51.787 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:51.787 "is_configured": true, 00:07:51.787 "data_offset": 2048, 00:07:51.787 "data_size": 63488 00:07:51.787 }, 00:07:51.787 { 00:07:51.787 "name": "pt2", 00:07:51.787 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:51.787 "is_configured": true, 00:07:51.787 "data_offset": 2048, 00:07:51.787 "data_size": 63488 00:07:51.787 } 00:07:51.787 ] 00:07:51.787 }' 00:07:51.787 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:07:51.787 12:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.047 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:52.047 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:52.047 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:52.047 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:52.047 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:52.047 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:52.047 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:52.047 12:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.047 12:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.047 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:52.047 [2024-11-26 12:51:09.620701] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:52.047 12:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.047 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:52.047 "name": "raid_bdev1", 00:07:52.047 "aliases": [ 00:07:52.047 "7cdc0f9e-c7a5-4cf0-9712-8b3da7621e1f" 00:07:52.047 ], 00:07:52.047 "product_name": "Raid Volume", 00:07:52.047 "block_size": 512, 00:07:52.047 "num_blocks": 63488, 00:07:52.047 "uuid": "7cdc0f9e-c7a5-4cf0-9712-8b3da7621e1f", 00:07:52.047 "assigned_rate_limits": { 00:07:52.047 "rw_ios_per_sec": 0, 00:07:52.047 "rw_mbytes_per_sec": 0, 00:07:52.047 "r_mbytes_per_sec": 0, 00:07:52.047 "w_mbytes_per_sec": 0 
00:07:52.047 }, 00:07:52.047 "claimed": false, 00:07:52.047 "zoned": false, 00:07:52.047 "supported_io_types": { 00:07:52.047 "read": true, 00:07:52.047 "write": true, 00:07:52.047 "unmap": false, 00:07:52.047 "flush": false, 00:07:52.047 "reset": true, 00:07:52.047 "nvme_admin": false, 00:07:52.047 "nvme_io": false, 00:07:52.047 "nvme_io_md": false, 00:07:52.047 "write_zeroes": true, 00:07:52.047 "zcopy": false, 00:07:52.047 "get_zone_info": false, 00:07:52.047 "zone_management": false, 00:07:52.047 "zone_append": false, 00:07:52.047 "compare": false, 00:07:52.047 "compare_and_write": false, 00:07:52.047 "abort": false, 00:07:52.047 "seek_hole": false, 00:07:52.047 "seek_data": false, 00:07:52.047 "copy": false, 00:07:52.047 "nvme_iov_md": false 00:07:52.047 }, 00:07:52.047 "memory_domains": [ 00:07:52.047 { 00:07:52.047 "dma_device_id": "system", 00:07:52.047 "dma_device_type": 1 00:07:52.047 }, 00:07:52.047 { 00:07:52.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.047 "dma_device_type": 2 00:07:52.047 }, 00:07:52.047 { 00:07:52.047 "dma_device_id": "system", 00:07:52.047 "dma_device_type": 1 00:07:52.047 }, 00:07:52.047 { 00:07:52.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.047 "dma_device_type": 2 00:07:52.047 } 00:07:52.047 ], 00:07:52.047 "driver_specific": { 00:07:52.047 "raid": { 00:07:52.047 "uuid": "7cdc0f9e-c7a5-4cf0-9712-8b3da7621e1f", 00:07:52.047 "strip_size_kb": 0, 00:07:52.047 "state": "online", 00:07:52.047 "raid_level": "raid1", 00:07:52.047 "superblock": true, 00:07:52.047 "num_base_bdevs": 2, 00:07:52.047 "num_base_bdevs_discovered": 2, 00:07:52.047 "num_base_bdevs_operational": 2, 00:07:52.047 "base_bdevs_list": [ 00:07:52.047 { 00:07:52.047 "name": "pt1", 00:07:52.047 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:52.047 "is_configured": true, 00:07:52.047 "data_offset": 2048, 00:07:52.047 "data_size": 63488 00:07:52.047 }, 00:07:52.047 { 00:07:52.047 "name": "pt2", 00:07:52.047 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:07:52.047 "is_configured": true, 00:07:52.047 "data_offset": 2048, 00:07:52.047 "data_size": 63488 00:07:52.047 } 00:07:52.047 ] 00:07:52.047 } 00:07:52.047 } 00:07:52.047 }' 00:07:52.047 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:52.047 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:52.047 pt2' 00:07:52.047 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.308 [2024-11-26 12:51:09.840336] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7cdc0f9e-c7a5-4cf0-9712-8b3da7621e1f '!=' 7cdc0f9e-c7a5-4cf0-9712-8b3da7621e1f ']' 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:52.308 [2024-11-26 12:51:09.884021] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:07:52.308 "name": "raid_bdev1", 00:07:52.308 "uuid": "7cdc0f9e-c7a5-4cf0-9712-8b3da7621e1f", 00:07:52.308 "strip_size_kb": 0, 00:07:52.308 "state": "online", 00:07:52.308 "raid_level": "raid1", 00:07:52.308 "superblock": true, 00:07:52.308 "num_base_bdevs": 2, 00:07:52.308 "num_base_bdevs_discovered": 1, 00:07:52.308 "num_base_bdevs_operational": 1, 00:07:52.308 "base_bdevs_list": [ 00:07:52.308 { 00:07:52.308 "name": null, 00:07:52.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.308 "is_configured": false, 00:07:52.308 "data_offset": 0, 00:07:52.308 "data_size": 63488 00:07:52.308 }, 00:07:52.308 { 00:07:52.308 "name": "pt2", 00:07:52.308 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:52.308 "is_configured": true, 00:07:52.308 "data_offset": 2048, 00:07:52.308 "data_size": 63488 00:07:52.308 } 00:07:52.308 ] 00:07:52.308 }' 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.308 12:51:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.879 [2024-11-26 12:51:10.323293] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:52.879 [2024-11-26 12:51:10.323321] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:52.879 [2024-11-26 12:51:10.323394] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.879 [2024-11-26 12:51:10.323443] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:52.879 [2024-11-26 12:51:10.323459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.879 [2024-11-26 12:51:10.395250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:52.879 [2024-11-26 12:51:10.395300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:52.879 [2024-11-26 12:51:10.395318] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:52.879 [2024-11-26 12:51:10.395328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:52.879 [2024-11-26 12:51:10.397474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:52.879 [2024-11-26 12:51:10.397510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:52.879 [2024-11-26 12:51:10.397597] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:52.879 [2024-11-26 12:51:10.397625] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:52.879 [2024-11-26 12:51:10.397695] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:07:52.879 [2024-11-26 12:51:10.397704] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:52.879 [2024-11-26 12:51:10.397908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:52.879 [2024-11-26 12:51:10.398032] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:07:52.879 [2024-11-26 12:51:10.398049] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000006d00 00:07:52.879 [2024-11-26 12:51:10.398149] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.879 pt2 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.879 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:07:52.879 "name": "raid_bdev1", 00:07:52.879 "uuid": "7cdc0f9e-c7a5-4cf0-9712-8b3da7621e1f", 00:07:52.879 "strip_size_kb": 0, 00:07:52.879 "state": "online", 00:07:52.879 "raid_level": "raid1", 00:07:52.879 "superblock": true, 00:07:52.879 "num_base_bdevs": 2, 00:07:52.879 "num_base_bdevs_discovered": 1, 00:07:52.879 "num_base_bdevs_operational": 1, 00:07:52.879 "base_bdevs_list": [ 00:07:52.879 { 00:07:52.879 "name": null, 00:07:52.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.879 "is_configured": false, 00:07:52.879 "data_offset": 2048, 00:07:52.879 "data_size": 63488 00:07:52.879 }, 00:07:52.879 { 00:07:52.879 "name": "pt2", 00:07:52.879 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:52.879 "is_configured": true, 00:07:52.879 "data_offset": 2048, 00:07:52.879 "data_size": 63488 00:07:52.879 } 00:07:52.879 ] 00:07:52.879 }' 00:07:52.880 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.880 12:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.140 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:53.140 12:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.140 12:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.401 [2024-11-26 12:51:10.818678] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:53.401 [2024-11-26 12:51:10.818706] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:53.401 [2024-11-26 12:51:10.818770] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:53.401 [2024-11-26 12:51:10.818812] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:53.401 [2024-11-26 12:51:10.818823] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.401 [2024-11-26 12:51:10.878534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:53.401 [2024-11-26 12:51:10.878588] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.401 [2024-11-26 12:51:10.878609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:07:53.401 [2024-11-26 12:51:10.878625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.401 [2024-11-26 12:51:10.880706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.401 [2024-11-26 12:51:10.880745] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:53.401 [2024-11-26 12:51:10.880809] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:53.401 [2024-11-26 12:51:10.880847] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:53.401 [2024-11-26 12:51:10.880935] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:53.401 [2024-11-26 12:51:10.880947] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:53.401 [2024-11-26 12:51:10.880961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:07:53.401 [2024-11-26 12:51:10.881017] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:53.401 [2024-11-26 12:51:10.881086] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:53.401 [2024-11-26 12:51:10.881098] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:53.401 [2024-11-26 12:51:10.881320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:53.401 [2024-11-26 12:51:10.881429] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:53.401 [2024-11-26 12:51:10.881459] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:07:53.401 [2024-11-26 12:51:10.881562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.401 pt1 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.401 "name": "raid_bdev1", 00:07:53.401 "uuid": "7cdc0f9e-c7a5-4cf0-9712-8b3da7621e1f", 00:07:53.401 "strip_size_kb": 0, 00:07:53.401 "state": "online", 00:07:53.401 "raid_level": "raid1", 00:07:53.401 "superblock": true, 00:07:53.401 "num_base_bdevs": 2, 00:07:53.401 "num_base_bdevs_discovered": 1, 00:07:53.401 "num_base_bdevs_operational": 
1, 00:07:53.401 "base_bdevs_list": [ 00:07:53.401 { 00:07:53.401 "name": null, 00:07:53.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.401 "is_configured": false, 00:07:53.401 "data_offset": 2048, 00:07:53.401 "data_size": 63488 00:07:53.401 }, 00:07:53.401 { 00:07:53.401 "name": "pt2", 00:07:53.401 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:53.401 "is_configured": true, 00:07:53.401 "data_offset": 2048, 00:07:53.401 "data_size": 63488 00:07:53.401 } 00:07:53.401 ] 00:07:53.401 }' 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.401 12:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.661 12:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:53.661 12:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.661 12:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.661 12:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:53.661 12:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.661 12:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:07:53.661 12:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:07:53.661 12:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:53.661 12:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.661 12:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.661 [2024-11-26 12:51:11.338010] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:53.922 12:51:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.922 12:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7cdc0f9e-c7a5-4cf0-9712-8b3da7621e1f '!=' 7cdc0f9e-c7a5-4cf0-9712-8b3da7621e1f ']' 00:07:53.922 12:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74691 00:07:53.922 12:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74691 ']' 00:07:53.922 12:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74691 00:07:53.922 12:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:53.922 12:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:53.922 12:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74691 00:07:53.922 12:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:53.922 12:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:53.922 killing process with pid 74691 00:07:53.922 12:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74691' 00:07:53.922 12:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74691 00:07:53.922 [2024-11-26 12:51:11.421996] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:53.922 [2024-11-26 12:51:11.422062] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:53.922 [2024-11-26 12:51:11.422103] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:53.922 [2024-11-26 12:51:11.422110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:07:53.922 12:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 
74691 00:07:53.922 [2024-11-26 12:51:11.444825] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:54.183 12:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:54.183 00:07:54.183 real 0m4.783s 00:07:54.183 user 0m7.773s 00:07:54.183 sys 0m1.017s 00:07:54.183 12:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.183 12:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.183 ************************************ 00:07:54.183 END TEST raid_superblock_test 00:07:54.183 ************************************ 00:07:54.183 12:51:11 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:07:54.183 12:51:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:54.183 12:51:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.183 12:51:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:54.183 ************************************ 00:07:54.183 START TEST raid_read_error_test 00:07:54.183 ************************************ 00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.vqmfbpvpkU 00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75009 00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75009 00:07:54.183 
12:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 75009 ']' 00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:54.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:54.183 12:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.443 [2024-11-26 12:51:11.862314] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:54.443 [2024-11-26 12:51:11.862428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75009 ] 00:07:54.443 [2024-11-26 12:51:12.020418] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.443 [2024-11-26 12:51:12.064673] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.443 [2024-11-26 12:51:12.106606] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.443 [2024-11-26 12:51:12.106646] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.382 BaseBdev1_malloc 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.382 true 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.382 [2024-11-26 12:51:12.732856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:55.382 [2024-11-26 12:51:12.732914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.382 [2024-11-26 12:51:12.732939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:55.382 [2024-11-26 12:51:12.732948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.382 [2024-11-26 12:51:12.735026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.382 [2024-11-26 12:51:12.735064] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:07:55.382 BaseBdev1 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.382 BaseBdev2_malloc 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.382 true 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.382 [2024-11-26 12:51:12.784031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:55.382 [2024-11-26 12:51:12.784096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.382 [2024-11-26 12:51:12.784121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:55.382 [2024-11-26 12:51:12.784134] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.382 [2024-11-26 12:51:12.786842] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.382 [2024-11-26 12:51:12.786889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:55.382 BaseBdev2 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.382 [2024-11-26 12:51:12.796007] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:55.382 [2024-11-26 12:51:12.797779] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:55.382 [2024-11-26 12:51:12.797949] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:55.382 [2024-11-26 12:51:12.797962] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:55.382 [2024-11-26 12:51:12.798208] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:55.382 [2024-11-26 12:51:12.798344] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:55.382 [2024-11-26 12:51:12.798366] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:55.382 [2024-11-26 12:51:12.798490] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.382 12:51:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.383 12:51:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.383 12:51:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.383 12:51:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.383 12:51:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.383 12:51:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.383 12:51:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.383 12:51:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.383 "name": "raid_bdev1", 00:07:55.383 "uuid": "ea0061bb-3673-40f1-b0de-239e37e6648c", 00:07:55.383 "strip_size_kb": 0, 00:07:55.383 "state": "online", 00:07:55.383 "raid_level": "raid1", 00:07:55.383 "superblock": true, 00:07:55.383 "num_base_bdevs": 2, 00:07:55.383 
"num_base_bdevs_discovered": 2, 00:07:55.383 "num_base_bdevs_operational": 2, 00:07:55.383 "base_bdevs_list": [ 00:07:55.383 { 00:07:55.383 "name": "BaseBdev1", 00:07:55.383 "uuid": "a7ce57b9-cef3-5e54-99e5-012f1db24e05", 00:07:55.383 "is_configured": true, 00:07:55.383 "data_offset": 2048, 00:07:55.383 "data_size": 63488 00:07:55.383 }, 00:07:55.383 { 00:07:55.383 "name": "BaseBdev2", 00:07:55.383 "uuid": "b2baa832-e84d-5be7-ac9e-8166b0598261", 00:07:55.383 "is_configured": true, 00:07:55.383 "data_offset": 2048, 00:07:55.383 "data_size": 63488 00:07:55.383 } 00:07:55.383 ] 00:07:55.383 }' 00:07:55.383 12:51:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.383 12:51:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.642 12:51:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:55.642 12:51:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:55.902 [2024-11-26 12:51:13.331578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:56.843 12:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:56.843 12:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.843 12:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.843 12:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.843 12:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:56.843 12:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:56.843 12:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:07:56.843 12:51:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:56.843 12:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:56.843 12:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:56.843 12:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.843 12:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:56.843 12:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.843 12:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.843 12:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.843 12:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.843 12:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.843 12:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.843 12:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.843 12:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.843 12:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.843 12:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.843 12:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.843 12:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.843 "name": "raid_bdev1", 00:07:56.843 "uuid": "ea0061bb-3673-40f1-b0de-239e37e6648c", 00:07:56.843 "strip_size_kb": 0, 00:07:56.843 "state": "online", 
00:07:56.843 "raid_level": "raid1", 00:07:56.843 "superblock": true, 00:07:56.843 "num_base_bdevs": 2, 00:07:56.843 "num_base_bdevs_discovered": 2, 00:07:56.843 "num_base_bdevs_operational": 2, 00:07:56.843 "base_bdevs_list": [ 00:07:56.843 { 00:07:56.843 "name": "BaseBdev1", 00:07:56.843 "uuid": "a7ce57b9-cef3-5e54-99e5-012f1db24e05", 00:07:56.843 "is_configured": true, 00:07:56.843 "data_offset": 2048, 00:07:56.843 "data_size": 63488 00:07:56.843 }, 00:07:56.843 { 00:07:56.843 "name": "BaseBdev2", 00:07:56.843 "uuid": "b2baa832-e84d-5be7-ac9e-8166b0598261", 00:07:56.843 "is_configured": true, 00:07:56.843 "data_offset": 2048, 00:07:56.843 "data_size": 63488 00:07:56.843 } 00:07:56.843 ] 00:07:56.843 }' 00:07:56.843 12:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.843 12:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.103 12:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:57.103 12:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.103 12:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.103 [2024-11-26 12:51:14.701479] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:57.103 [2024-11-26 12:51:14.701522] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:57.103 [2024-11-26 12:51:14.703999] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.103 [2024-11-26 12:51:14.704052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.103 [2024-11-26 12:51:14.704134] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:57.103 [2024-11-26 12:51:14.704144] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name 
raid_bdev1, state offline 00:07:57.103 { 00:07:57.103 "results": [ 00:07:57.103 { 00:07:57.103 "job": "raid_bdev1", 00:07:57.103 "core_mask": "0x1", 00:07:57.103 "workload": "randrw", 00:07:57.103 "percentage": 50, 00:07:57.103 "status": "finished", 00:07:57.103 "queue_depth": 1, 00:07:57.103 "io_size": 131072, 00:07:57.103 "runtime": 1.370801, 00:07:57.103 "iops": 20581.39729982689, 00:07:57.103 "mibps": 2572.6746624783614, 00:07:57.103 "io_failed": 0, 00:07:57.103 "io_timeout": 0, 00:07:57.103 "avg_latency_us": 46.19181971456375, 00:07:57.103 "min_latency_us": 21.351965065502185, 00:07:57.103 "max_latency_us": 1395.1441048034935 00:07:57.103 } 00:07:57.103 ], 00:07:57.103 "core_count": 1 00:07:57.103 } 00:07:57.104 12:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.104 12:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75009 00:07:57.104 12:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 75009 ']' 00:07:57.104 12:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 75009 00:07:57.104 12:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:57.104 12:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:57.104 12:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75009 00:07:57.104 12:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:57.104 12:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:57.104 12:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75009' 00:07:57.104 killing process with pid 75009 00:07:57.104 12:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 75009 00:07:57.104 [2024-11-26 
12:51:14.752623] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:57.104 12:51:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 75009 00:07:57.104 [2024-11-26 12:51:14.768328] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:57.363 12:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:57.363 12:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.vqmfbpvpkU 00:07:57.363 12:51:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:57.363 12:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:57.363 12:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:57.363 12:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:57.363 12:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:57.363 12:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:57.363 00:07:57.363 real 0m3.244s 00:07:57.363 user 0m4.119s 00:07:57.363 sys 0m0.514s 00:07:57.363 12:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.363 12:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.363 ************************************ 00:07:57.363 END TEST raid_read_error_test 00:07:57.363 ************************************ 00:07:57.625 12:51:15 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:07:57.625 12:51:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:57.625 12:51:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.625 12:51:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:57.625 ************************************ 00:07:57.625 START TEST 
raid_write_error_test 00:07:57.625 ************************************ 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:57.625 12:51:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.SQ2lamU4th 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75139 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75139 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 75139 ']' 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.625 12:51:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.625 [2024-11-26 12:51:15.185943] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:57.625 [2024-11-26 12:51:15.186075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75139 ] 00:07:57.885 [2024-11-26 12:51:15.348865] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.885 [2024-11-26 12:51:15.393628] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.885 [2024-11-26 12:51:15.435043] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.885 [2024-11-26 12:51:15.435194] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.455 12:51:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:58.455 12:51:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:58.455 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:58.455 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:58.455 12:51:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.455 12:51:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.455 BaseBdev1_malloc 00:07:58.455 12:51:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.455 12:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.456 true 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.456 [2024-11-26 12:51:16.021007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:58.456 [2024-11-26 12:51:16.021171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.456 [2024-11-26 12:51:16.021233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:58.456 [2024-11-26 12:51:16.021272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.456 [2024-11-26 12:51:16.023452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.456 [2024-11-26 12:51:16.023524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:58.456 BaseBdev1 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.456 BaseBdev2_malloc 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:58.456 12:51:16 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.456 true 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.456 [2024-11-26 12:51:16.073029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:58.456 [2024-11-26 12:51:16.073139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.456 [2024-11-26 12:51:16.073201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:58.456 [2024-11-26 12:51:16.073249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.456 [2024-11-26 12:51:16.075302] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.456 [2024-11-26 12:51:16.075375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:58.456 BaseBdev2 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.456 [2024-11-26 12:51:16.085045] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:58.456 [2024-11-26 12:51:16.086878] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:58.456 [2024-11-26 12:51:16.087113] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:58.456 [2024-11-26 12:51:16.087169] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:58.456 [2024-11-26 12:51:16.087439] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:58.456 [2024-11-26 12:51:16.087608] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:58.456 [2024-11-26 12:51:16.087654] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:58.456 [2024-11-26 12:51:16.087801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.456 12:51:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.715 12:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.715 "name": "raid_bdev1", 00:07:58.715 "uuid": "fb2ec3f8-a682-4868-a5b1-f399e4560f8b", 00:07:58.715 "strip_size_kb": 0, 00:07:58.715 "state": "online", 00:07:58.715 "raid_level": "raid1", 00:07:58.715 "superblock": true, 00:07:58.715 "num_base_bdevs": 2, 00:07:58.715 "num_base_bdevs_discovered": 2, 00:07:58.715 "num_base_bdevs_operational": 2, 00:07:58.715 "base_bdevs_list": [ 00:07:58.715 { 00:07:58.715 "name": "BaseBdev1", 00:07:58.715 "uuid": "ba747419-882f-500d-b4b5-82c07181ef5c", 00:07:58.715 "is_configured": true, 00:07:58.715 "data_offset": 2048, 00:07:58.715 "data_size": 63488 00:07:58.715 }, 00:07:58.715 { 00:07:58.715 "name": "BaseBdev2", 00:07:58.715 "uuid": "343de72e-ed9f-5bc4-8c43-f0285b0ed692", 00:07:58.715 "is_configured": true, 00:07:58.715 "data_offset": 2048, 00:07:58.715 "data_size": 63488 00:07:58.715 } 00:07:58.715 ] 00:07:58.715 }' 00:07:58.715 12:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.715 12:51:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.983 12:51:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:58.983 12:51:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:58.983 [2024-11-26 12:51:16.632477] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:59.934 12:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:59.934 12:51:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.934 12:51:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.934 [2024-11-26 12:51:17.549074] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:07:59.934 [2024-11-26 12:51:17.549259] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:59.934 [2024-11-26 12:51:17.549476] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:07:59.934 12:51:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.934 12:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:59.934 12:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:59.934 12:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:07:59.934 12:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:07:59.934 12:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:59.934 12:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:59.934 12:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:59.934 12:51:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:59.934 12:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:59.934 12:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:59.934 12:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.934 12:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.934 12:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.934 12:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.934 12:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.934 12:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:59.934 12:51:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.934 12:51:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.934 12:51:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.934 12:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.934 "name": "raid_bdev1", 00:07:59.934 "uuid": "fb2ec3f8-a682-4868-a5b1-f399e4560f8b", 00:07:59.934 "strip_size_kb": 0, 00:07:59.934 "state": "online", 00:07:59.934 "raid_level": "raid1", 00:07:59.934 "superblock": true, 00:07:59.934 "num_base_bdevs": 2, 00:07:59.934 "num_base_bdevs_discovered": 1, 00:07:59.934 "num_base_bdevs_operational": 1, 00:07:59.934 "base_bdevs_list": [ 00:07:59.934 { 00:07:59.934 "name": null, 00:07:59.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.934 "is_configured": false, 00:07:59.934 "data_offset": 0, 00:07:59.934 "data_size": 63488 00:07:59.934 }, 00:07:59.934 { 00:07:59.934 "name": 
"BaseBdev2", 00:07:59.934 "uuid": "343de72e-ed9f-5bc4-8c43-f0285b0ed692", 00:07:59.934 "is_configured": true, 00:07:59.934 "data_offset": 2048, 00:07:59.934 "data_size": 63488 00:07:59.934 } 00:07:59.934 ] 00:07:59.934 }' 00:07:59.934 12:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.934 12:51:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.504 12:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:00.504 12:51:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.504 12:51:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.504 [2024-11-26 12:51:17.950801] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:00.504 [2024-11-26 12:51:17.950928] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:00.504 [2024-11-26 12:51:17.953489] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:00.504 [2024-11-26 12:51:17.953573] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.504 [2024-11-26 12:51:17.953641] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:00.504 [2024-11-26 12:51:17.953702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:00.504 { 00:08:00.504 "results": [ 00:08:00.504 { 00:08:00.504 "job": "raid_bdev1", 00:08:00.504 "core_mask": "0x1", 00:08:00.504 "workload": "randrw", 00:08:00.504 "percentage": 50, 00:08:00.504 "status": "finished", 00:08:00.504 "queue_depth": 1, 00:08:00.504 "io_size": 131072, 00:08:00.504 "runtime": 1.319143, 00:08:00.504 "iops": 23785.139291191328, 00:08:00.504 "mibps": 2973.142411398916, 00:08:00.504 "io_failed": 0, 00:08:00.504 "io_timeout": 0, 
00:08:00.504 "avg_latency_us": 39.575780113969124, 00:08:00.504 "min_latency_us": 21.016593886462882, 00:08:00.504 "max_latency_us": 1323.598253275109 00:08:00.504 } 00:08:00.504 ], 00:08:00.504 "core_count": 1 00:08:00.504 } 00:08:00.504 12:51:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.504 12:51:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75139 00:08:00.504 12:51:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 75139 ']' 00:08:00.504 12:51:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 75139 00:08:00.504 12:51:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:00.504 12:51:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:00.504 12:51:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75139 00:08:00.504 12:51:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:00.504 12:51:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:00.504 killing process with pid 75139 00:08:00.504 12:51:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75139' 00:08:00.504 12:51:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 75139 00:08:00.504 [2024-11-26 12:51:17.989885] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:00.504 12:51:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 75139 00:08:00.504 [2024-11-26 12:51:18.005230] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:00.765 12:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.SQ2lamU4th 00:08:00.765 12:51:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:00.765 12:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:00.765 12:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:00.765 12:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:00.765 12:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:00.765 12:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:00.765 12:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:00.765 00:08:00.765 real 0m3.176s 00:08:00.765 user 0m3.968s 00:08:00.765 sys 0m0.531s 00:08:00.765 12:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.765 ************************************ 00:08:00.765 END TEST raid_write_error_test 00:08:00.765 ************************************ 00:08:00.765 12:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.765 12:51:18 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:00.765 12:51:18 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:00.765 12:51:18 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:00.765 12:51:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:00.765 12:51:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.765 12:51:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:00.765 ************************************ 00:08:00.765 START TEST raid_state_function_test 00:08:00.765 ************************************ 00:08:00.765 12:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:08:00.765 12:51:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:00.765 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:00.765 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:00.765 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:00.765 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:00.765 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:00.765 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:00.765 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:00.765 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:00.765 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:00.765 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:00.765 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:00.765 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:00.765 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:00.765 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:00.765 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:00.765 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:00.765 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:00.765 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:00.765 
12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:00.765 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:00.765 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:00.765 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:00.765 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:00.765 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:00.766 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:00.766 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=75266 00:08:00.766 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:00.766 Process raid pid: 75266 00:08:00.766 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75266' 00:08:00.766 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 75266 00:08:00.766 12:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 75266 ']' 00:08:00.766 12:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.766 12:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.766 12:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:00.766 12:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.766 12:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.766 [2024-11-26 12:51:18.426010] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:00.766 [2024-11-26 12:51:18.426227] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.027 [2024-11-26 12:51:18.587075] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.027 [2024-11-26 12:51:18.631044] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.027 [2024-11-26 12:51:18.672498] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.027 [2024-11-26 12:51:18.672619] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.600 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.600 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:01.600 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:01.600 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.600 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.600 [2024-11-26 12:51:19.253303] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:01.600 [2024-11-26 12:51:19.253415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:01.600 [2024-11-26 12:51:19.253452] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:01.600 [2024-11-26 12:51:19.253475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:01.600 [2024-11-26 12:51:19.253509] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:01.600 [2024-11-26 12:51:19.253536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:01.600 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.600 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:01.600 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.600 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.600 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.600 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.600 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.600 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.600 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.600 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.600 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.600 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.600 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.600 12:51:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.600 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.600 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.861 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.861 "name": "Existed_Raid", 00:08:01.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.861 "strip_size_kb": 64, 00:08:01.861 "state": "configuring", 00:08:01.861 "raid_level": "raid0", 00:08:01.861 "superblock": false, 00:08:01.861 "num_base_bdevs": 3, 00:08:01.861 "num_base_bdevs_discovered": 0, 00:08:01.861 "num_base_bdevs_operational": 3, 00:08:01.861 "base_bdevs_list": [ 00:08:01.861 { 00:08:01.861 "name": "BaseBdev1", 00:08:01.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.861 "is_configured": false, 00:08:01.861 "data_offset": 0, 00:08:01.861 "data_size": 0 00:08:01.861 }, 00:08:01.861 { 00:08:01.861 "name": "BaseBdev2", 00:08:01.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.862 "is_configured": false, 00:08:01.862 "data_offset": 0, 00:08:01.862 "data_size": 0 00:08:01.862 }, 00:08:01.862 { 00:08:01.862 "name": "BaseBdev3", 00:08:01.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.862 "is_configured": false, 00:08:01.862 "data_offset": 0, 00:08:01.862 "data_size": 0 00:08:01.862 } 00:08:01.862 ] 00:08:01.862 }' 00:08:01.862 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.862 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.121 12:51:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.121 [2024-11-26 12:51:19.660519] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:02.121 [2024-11-26 12:51:19.660611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.121 [2024-11-26 12:51:19.672520] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:02.121 [2024-11-26 12:51:19.672604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:02.121 [2024-11-26 12:51:19.672630] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:02.121 [2024-11-26 12:51:19.672653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:02.121 [2024-11-26 12:51:19.672671] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:02.121 [2024-11-26 12:51:19.672690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.121 [2024-11-26 12:51:19.693330] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:02.121 BaseBdev1 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.121 [ 00:08:02.121 { 00:08:02.121 "name": "BaseBdev1", 00:08:02.121 "aliases": [ 00:08:02.121 "b89baccf-3615-495a-b409-c446476d9ee4" 00:08:02.121 ], 00:08:02.121 
"product_name": "Malloc disk", 00:08:02.121 "block_size": 512, 00:08:02.121 "num_blocks": 65536, 00:08:02.121 "uuid": "b89baccf-3615-495a-b409-c446476d9ee4", 00:08:02.121 "assigned_rate_limits": { 00:08:02.121 "rw_ios_per_sec": 0, 00:08:02.121 "rw_mbytes_per_sec": 0, 00:08:02.121 "r_mbytes_per_sec": 0, 00:08:02.121 "w_mbytes_per_sec": 0 00:08:02.121 }, 00:08:02.121 "claimed": true, 00:08:02.121 "claim_type": "exclusive_write", 00:08:02.121 "zoned": false, 00:08:02.121 "supported_io_types": { 00:08:02.121 "read": true, 00:08:02.121 "write": true, 00:08:02.121 "unmap": true, 00:08:02.121 "flush": true, 00:08:02.121 "reset": true, 00:08:02.121 "nvme_admin": false, 00:08:02.121 "nvme_io": false, 00:08:02.121 "nvme_io_md": false, 00:08:02.121 "write_zeroes": true, 00:08:02.121 "zcopy": true, 00:08:02.121 "get_zone_info": false, 00:08:02.121 "zone_management": false, 00:08:02.121 "zone_append": false, 00:08:02.121 "compare": false, 00:08:02.121 "compare_and_write": false, 00:08:02.121 "abort": true, 00:08:02.121 "seek_hole": false, 00:08:02.121 "seek_data": false, 00:08:02.121 "copy": true, 00:08:02.121 "nvme_iov_md": false 00:08:02.121 }, 00:08:02.121 "memory_domains": [ 00:08:02.121 { 00:08:02.121 "dma_device_id": "system", 00:08:02.121 "dma_device_type": 1 00:08:02.121 }, 00:08:02.121 { 00:08:02.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.121 "dma_device_type": 2 00:08:02.121 } 00:08:02.121 ], 00:08:02.121 "driver_specific": {} 00:08:02.121 } 00:08:02.121 ] 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.121 12:51:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.121 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.122 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.122 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.122 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.122 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.122 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.122 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.122 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.122 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.122 "name": "Existed_Raid", 00:08:02.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.122 "strip_size_kb": 64, 00:08:02.122 "state": "configuring", 00:08:02.122 "raid_level": "raid0", 00:08:02.122 "superblock": false, 00:08:02.122 "num_base_bdevs": 3, 00:08:02.122 "num_base_bdevs_discovered": 1, 00:08:02.122 "num_base_bdevs_operational": 3, 00:08:02.122 "base_bdevs_list": [ 00:08:02.122 { 00:08:02.122 "name": "BaseBdev1", 
00:08:02.122 "uuid": "b89baccf-3615-495a-b409-c446476d9ee4", 00:08:02.122 "is_configured": true, 00:08:02.122 "data_offset": 0, 00:08:02.122 "data_size": 65536 00:08:02.122 }, 00:08:02.122 { 00:08:02.122 "name": "BaseBdev2", 00:08:02.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.122 "is_configured": false, 00:08:02.122 "data_offset": 0, 00:08:02.122 "data_size": 0 00:08:02.122 }, 00:08:02.122 { 00:08:02.122 "name": "BaseBdev3", 00:08:02.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.122 "is_configured": false, 00:08:02.122 "data_offset": 0, 00:08:02.122 "data_size": 0 00:08:02.122 } 00:08:02.122 ] 00:08:02.122 }' 00:08:02.122 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.122 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.692 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:02.692 12:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.692 12:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.692 [2024-11-26 12:51:20.116608] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:02.692 [2024-11-26 12:51:20.116694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:02.692 12:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.692 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:02.692 12:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.692 12:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.692 [2024-11-26 
12:51:20.128636] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:02.692 [2024-11-26 12:51:20.130516] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:02.692 [2024-11-26 12:51:20.130604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:02.692 [2024-11-26 12:51:20.130631] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:02.692 [2024-11-26 12:51:20.130655] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:02.692 12:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.692 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:02.692 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:02.692 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:02.692 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.692 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.692 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.692 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.692 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:02.692 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.692 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.692 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:02.692 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.692 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.692 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.692 12:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.692 12:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.692 12:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.692 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.692 "name": "Existed_Raid", 00:08:02.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.692 "strip_size_kb": 64, 00:08:02.692 "state": "configuring", 00:08:02.692 "raid_level": "raid0", 00:08:02.692 "superblock": false, 00:08:02.692 "num_base_bdevs": 3, 00:08:02.692 "num_base_bdevs_discovered": 1, 00:08:02.692 "num_base_bdevs_operational": 3, 00:08:02.692 "base_bdevs_list": [ 00:08:02.692 { 00:08:02.692 "name": "BaseBdev1", 00:08:02.692 "uuid": "b89baccf-3615-495a-b409-c446476d9ee4", 00:08:02.692 "is_configured": true, 00:08:02.692 "data_offset": 0, 00:08:02.692 "data_size": 65536 00:08:02.692 }, 00:08:02.692 { 00:08:02.692 "name": "BaseBdev2", 00:08:02.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.692 "is_configured": false, 00:08:02.692 "data_offset": 0, 00:08:02.692 "data_size": 0 00:08:02.692 }, 00:08:02.692 { 00:08:02.692 "name": "BaseBdev3", 00:08:02.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.692 "is_configured": false, 00:08:02.692 "data_offset": 0, 00:08:02.692 "data_size": 0 00:08:02.692 } 00:08:02.692 ] 00:08:02.692 }' 00:08:02.692 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:02.692 12:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.952 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:02.952 12:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.952 12:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.952 [2024-11-26 12:51:20.544854] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:02.953 BaseBdev2 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:02.953 12:51:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.953 [ 00:08:02.953 { 00:08:02.953 "name": "BaseBdev2", 00:08:02.953 "aliases": [ 00:08:02.953 "3d00785f-9455-43a4-a7a6-bf0f160082b8" 00:08:02.953 ], 00:08:02.953 "product_name": "Malloc disk", 00:08:02.953 "block_size": 512, 00:08:02.953 "num_blocks": 65536, 00:08:02.953 "uuid": "3d00785f-9455-43a4-a7a6-bf0f160082b8", 00:08:02.953 "assigned_rate_limits": { 00:08:02.953 "rw_ios_per_sec": 0, 00:08:02.953 "rw_mbytes_per_sec": 0, 00:08:02.953 "r_mbytes_per_sec": 0, 00:08:02.953 "w_mbytes_per_sec": 0 00:08:02.953 }, 00:08:02.953 "claimed": true, 00:08:02.953 "claim_type": "exclusive_write", 00:08:02.953 "zoned": false, 00:08:02.953 "supported_io_types": { 00:08:02.953 "read": true, 00:08:02.953 "write": true, 00:08:02.953 "unmap": true, 00:08:02.953 "flush": true, 00:08:02.953 "reset": true, 00:08:02.953 "nvme_admin": false, 00:08:02.953 "nvme_io": false, 00:08:02.953 "nvme_io_md": false, 00:08:02.953 "write_zeroes": true, 00:08:02.953 "zcopy": true, 00:08:02.953 "get_zone_info": false, 00:08:02.953 "zone_management": false, 00:08:02.953 "zone_append": false, 00:08:02.953 "compare": false, 00:08:02.953 "compare_and_write": false, 00:08:02.953 "abort": true, 00:08:02.953 "seek_hole": false, 00:08:02.953 "seek_data": false, 00:08:02.953 "copy": true, 00:08:02.953 "nvme_iov_md": false 00:08:02.953 }, 00:08:02.953 "memory_domains": [ 00:08:02.953 { 00:08:02.953 "dma_device_id": "system", 00:08:02.953 "dma_device_type": 1 00:08:02.953 }, 00:08:02.953 { 00:08:02.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.953 "dma_device_type": 2 00:08:02.953 } 00:08:02.953 ], 00:08:02.953 "driver_specific": {} 00:08:02.953 } 00:08:02.953 ] 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.953 12:51:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.953 12:51:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.212 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.212 "name": "Existed_Raid", 00:08:03.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.212 "strip_size_kb": 64, 00:08:03.212 "state": "configuring", 00:08:03.212 "raid_level": "raid0", 00:08:03.212 "superblock": false, 00:08:03.212 "num_base_bdevs": 3, 00:08:03.212 "num_base_bdevs_discovered": 2, 00:08:03.212 "num_base_bdevs_operational": 3, 00:08:03.212 "base_bdevs_list": [ 00:08:03.212 { 00:08:03.212 "name": "BaseBdev1", 00:08:03.212 "uuid": "b89baccf-3615-495a-b409-c446476d9ee4", 00:08:03.212 "is_configured": true, 00:08:03.212 "data_offset": 0, 00:08:03.213 "data_size": 65536 00:08:03.213 }, 00:08:03.213 { 00:08:03.213 "name": "BaseBdev2", 00:08:03.213 "uuid": "3d00785f-9455-43a4-a7a6-bf0f160082b8", 00:08:03.213 "is_configured": true, 00:08:03.213 "data_offset": 0, 00:08:03.213 "data_size": 65536 00:08:03.213 }, 00:08:03.213 { 00:08:03.213 "name": "BaseBdev3", 00:08:03.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.213 "is_configured": false, 00:08:03.213 "data_offset": 0, 00:08:03.213 "data_size": 0 00:08:03.213 } 00:08:03.213 ] 00:08:03.213 }' 00:08:03.213 12:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.213 12:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.473 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:03.473 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.473 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.473 [2024-11-26 12:51:21.022886] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:03.473 [2024-11-26 12:51:21.023008] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:03.473 [2024-11-26 12:51:21.023037] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:03.473 [2024-11-26 12:51:21.023401] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:03.473 [2024-11-26 12:51:21.023582] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:03.473 [2024-11-26 12:51:21.023624] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:03.473 [2024-11-26 12:51:21.023866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:03.473 BaseBdev3 00:08:03.473 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.473 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:03.473 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:03.473 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:03.473 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:03.473 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:03.473 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:03.473 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:03.473 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.473 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.473 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.473 
12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:03.473 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.473 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.473 [ 00:08:03.473 { 00:08:03.473 "name": "BaseBdev3", 00:08:03.473 "aliases": [ 00:08:03.473 "c33e3c5b-3ea5-4427-813a-7b7d8f1111d7" 00:08:03.473 ], 00:08:03.473 "product_name": "Malloc disk", 00:08:03.473 "block_size": 512, 00:08:03.473 "num_blocks": 65536, 00:08:03.473 "uuid": "c33e3c5b-3ea5-4427-813a-7b7d8f1111d7", 00:08:03.473 "assigned_rate_limits": { 00:08:03.473 "rw_ios_per_sec": 0, 00:08:03.473 "rw_mbytes_per_sec": 0, 00:08:03.473 "r_mbytes_per_sec": 0, 00:08:03.473 "w_mbytes_per_sec": 0 00:08:03.473 }, 00:08:03.473 "claimed": true, 00:08:03.473 "claim_type": "exclusive_write", 00:08:03.473 "zoned": false, 00:08:03.473 "supported_io_types": { 00:08:03.473 "read": true, 00:08:03.473 "write": true, 00:08:03.473 "unmap": true, 00:08:03.473 "flush": true, 00:08:03.473 "reset": true, 00:08:03.473 "nvme_admin": false, 00:08:03.473 "nvme_io": false, 00:08:03.473 "nvme_io_md": false, 00:08:03.473 "write_zeroes": true, 00:08:03.473 "zcopy": true, 00:08:03.473 "get_zone_info": false, 00:08:03.473 "zone_management": false, 00:08:03.473 "zone_append": false, 00:08:03.473 "compare": false, 00:08:03.473 "compare_and_write": false, 00:08:03.473 "abort": true, 00:08:03.473 "seek_hole": false, 00:08:03.473 "seek_data": false, 00:08:03.473 "copy": true, 00:08:03.473 "nvme_iov_md": false 00:08:03.473 }, 00:08:03.473 "memory_domains": [ 00:08:03.473 { 00:08:03.473 "dma_device_id": "system", 00:08:03.473 "dma_device_type": 1 00:08:03.473 }, 00:08:03.473 { 00:08:03.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.473 "dma_device_type": 2 00:08:03.473 } 00:08:03.473 ], 00:08:03.473 "driver_specific": {} 00:08:03.473 } 00:08:03.474 ] 
00:08:03.474 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.474 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:03.474 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:03.474 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:03.474 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:03.474 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.474 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:03.474 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:03.474 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.474 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:03.474 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.474 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.474 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.474 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.474 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.474 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.474 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.474 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:08:03.474 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.474 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.474 "name": "Existed_Raid", 00:08:03.474 "uuid": "58f7cb9e-5e5a-4474-973b-f835244a1278", 00:08:03.474 "strip_size_kb": 64, 00:08:03.474 "state": "online", 00:08:03.474 "raid_level": "raid0", 00:08:03.474 "superblock": false, 00:08:03.474 "num_base_bdevs": 3, 00:08:03.474 "num_base_bdevs_discovered": 3, 00:08:03.474 "num_base_bdevs_operational": 3, 00:08:03.474 "base_bdevs_list": [ 00:08:03.474 { 00:08:03.474 "name": "BaseBdev1", 00:08:03.474 "uuid": "b89baccf-3615-495a-b409-c446476d9ee4", 00:08:03.474 "is_configured": true, 00:08:03.474 "data_offset": 0, 00:08:03.474 "data_size": 65536 00:08:03.474 }, 00:08:03.474 { 00:08:03.474 "name": "BaseBdev2", 00:08:03.474 "uuid": "3d00785f-9455-43a4-a7a6-bf0f160082b8", 00:08:03.474 "is_configured": true, 00:08:03.474 "data_offset": 0, 00:08:03.474 "data_size": 65536 00:08:03.474 }, 00:08:03.474 { 00:08:03.474 "name": "BaseBdev3", 00:08:03.474 "uuid": "c33e3c5b-3ea5-4427-813a-7b7d8f1111d7", 00:08:03.474 "is_configured": true, 00:08:03.474 "data_offset": 0, 00:08:03.474 "data_size": 65536 00:08:03.474 } 00:08:03.474 ] 00:08:03.474 }' 00:08:03.474 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.474 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.044 [2024-11-26 12:51:21.518344] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:04.044 "name": "Existed_Raid", 00:08:04.044 "aliases": [ 00:08:04.044 "58f7cb9e-5e5a-4474-973b-f835244a1278" 00:08:04.044 ], 00:08:04.044 "product_name": "Raid Volume", 00:08:04.044 "block_size": 512, 00:08:04.044 "num_blocks": 196608, 00:08:04.044 "uuid": "58f7cb9e-5e5a-4474-973b-f835244a1278", 00:08:04.044 "assigned_rate_limits": { 00:08:04.044 "rw_ios_per_sec": 0, 00:08:04.044 "rw_mbytes_per_sec": 0, 00:08:04.044 "r_mbytes_per_sec": 0, 00:08:04.044 "w_mbytes_per_sec": 0 00:08:04.044 }, 00:08:04.044 "claimed": false, 00:08:04.044 "zoned": false, 00:08:04.044 "supported_io_types": { 00:08:04.044 "read": true, 00:08:04.044 "write": true, 00:08:04.044 "unmap": true, 00:08:04.044 "flush": true, 00:08:04.044 "reset": true, 00:08:04.044 "nvme_admin": false, 00:08:04.044 "nvme_io": false, 00:08:04.044 "nvme_io_md": false, 00:08:04.044 "write_zeroes": true, 00:08:04.044 "zcopy": false, 00:08:04.044 "get_zone_info": false, 00:08:04.044 "zone_management": false, 00:08:04.044 
"zone_append": false, 00:08:04.044 "compare": false, 00:08:04.044 "compare_and_write": false, 00:08:04.044 "abort": false, 00:08:04.044 "seek_hole": false, 00:08:04.044 "seek_data": false, 00:08:04.044 "copy": false, 00:08:04.044 "nvme_iov_md": false 00:08:04.044 }, 00:08:04.044 "memory_domains": [ 00:08:04.044 { 00:08:04.044 "dma_device_id": "system", 00:08:04.044 "dma_device_type": 1 00:08:04.044 }, 00:08:04.044 { 00:08:04.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.044 "dma_device_type": 2 00:08:04.044 }, 00:08:04.044 { 00:08:04.044 "dma_device_id": "system", 00:08:04.044 "dma_device_type": 1 00:08:04.044 }, 00:08:04.044 { 00:08:04.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.044 "dma_device_type": 2 00:08:04.044 }, 00:08:04.044 { 00:08:04.044 "dma_device_id": "system", 00:08:04.044 "dma_device_type": 1 00:08:04.044 }, 00:08:04.044 { 00:08:04.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.044 "dma_device_type": 2 00:08:04.044 } 00:08:04.044 ], 00:08:04.044 "driver_specific": { 00:08:04.044 "raid": { 00:08:04.044 "uuid": "58f7cb9e-5e5a-4474-973b-f835244a1278", 00:08:04.044 "strip_size_kb": 64, 00:08:04.044 "state": "online", 00:08:04.044 "raid_level": "raid0", 00:08:04.044 "superblock": false, 00:08:04.044 "num_base_bdevs": 3, 00:08:04.044 "num_base_bdevs_discovered": 3, 00:08:04.044 "num_base_bdevs_operational": 3, 00:08:04.044 "base_bdevs_list": [ 00:08:04.044 { 00:08:04.044 "name": "BaseBdev1", 00:08:04.044 "uuid": "b89baccf-3615-495a-b409-c446476d9ee4", 00:08:04.044 "is_configured": true, 00:08:04.044 "data_offset": 0, 00:08:04.044 "data_size": 65536 00:08:04.044 }, 00:08:04.044 { 00:08:04.044 "name": "BaseBdev2", 00:08:04.044 "uuid": "3d00785f-9455-43a4-a7a6-bf0f160082b8", 00:08:04.044 "is_configured": true, 00:08:04.044 "data_offset": 0, 00:08:04.044 "data_size": 65536 00:08:04.044 }, 00:08:04.044 { 00:08:04.044 "name": "BaseBdev3", 00:08:04.044 "uuid": "c33e3c5b-3ea5-4427-813a-7b7d8f1111d7", 00:08:04.044 "is_configured": true, 
00:08:04.044 "data_offset": 0, 00:08:04.044 "data_size": 65536 00:08:04.044 } 00:08:04.044 ] 00:08:04.044 } 00:08:04.044 } 00:08:04.044 }' 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:04.044 BaseBdev2 00:08:04.044 BaseBdev3' 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.044 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.305 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.305 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.305 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.305 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:04.305 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.305 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.305 [2024-11-26 12:51:21.741727] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:04.305 [2024-11-26 12:51:21.741798] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:04.305 [2024-11-26 12:51:21.741890] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.305 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.305 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:04.305 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:04.305 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:04.305 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:04.305 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:04.305 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:04.305 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.305 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:04.305 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.305 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.305 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.305 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.305 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.305 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:04.305 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.305 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.305 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.305 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.305 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.305 12:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.305 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.305 "name": "Existed_Raid", 00:08:04.305 "uuid": "58f7cb9e-5e5a-4474-973b-f835244a1278", 00:08:04.305 "strip_size_kb": 64, 00:08:04.305 "state": "offline", 00:08:04.305 "raid_level": "raid0", 00:08:04.305 "superblock": false, 00:08:04.305 "num_base_bdevs": 3, 00:08:04.305 "num_base_bdevs_discovered": 2, 00:08:04.305 "num_base_bdevs_operational": 2, 00:08:04.305 "base_bdevs_list": [ 00:08:04.305 { 00:08:04.305 "name": null, 00:08:04.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.305 "is_configured": false, 00:08:04.305 "data_offset": 0, 00:08:04.305 "data_size": 65536 00:08:04.305 }, 00:08:04.305 { 00:08:04.305 "name": "BaseBdev2", 00:08:04.305 "uuid": "3d00785f-9455-43a4-a7a6-bf0f160082b8", 00:08:04.305 "is_configured": true, 00:08:04.305 "data_offset": 0, 00:08:04.305 "data_size": 65536 00:08:04.305 }, 00:08:04.305 { 00:08:04.305 "name": "BaseBdev3", 00:08:04.305 "uuid": "c33e3c5b-3ea5-4427-813a-7b7d8f1111d7", 00:08:04.305 "is_configured": true, 00:08:04.305 "data_offset": 0, 00:08:04.305 "data_size": 65536 00:08:04.305 } 00:08:04.305 ] 00:08:04.305 }' 00:08:04.305 12:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.305 12:51:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.566 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:04.566 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:04.566 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.566 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.566 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.566 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:04.566 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.566 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:04.566 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:04.566 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:04.566 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.566 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.566 [2024-11-26 12:51:22.228257] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:04.566 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.566 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:04.566 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.828 [2024-11-26 12:51:22.279435] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:04.828 [2024-11-26 12:51:22.279529] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.828 12:51:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.828 BaseBdev2 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.828 12:51:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.828 [ 00:08:04.828 { 00:08:04.828 "name": "BaseBdev2", 00:08:04.828 "aliases": [ 00:08:04.828 "16bea7a2-fb99-4888-86a9-2c3f6258e74c" 00:08:04.828 ], 00:08:04.828 "product_name": "Malloc disk", 00:08:04.828 "block_size": 512, 00:08:04.828 "num_blocks": 65536, 00:08:04.828 "uuid": "16bea7a2-fb99-4888-86a9-2c3f6258e74c", 00:08:04.828 "assigned_rate_limits": { 00:08:04.828 "rw_ios_per_sec": 0, 00:08:04.828 "rw_mbytes_per_sec": 0, 00:08:04.828 "r_mbytes_per_sec": 0, 00:08:04.828 "w_mbytes_per_sec": 0 00:08:04.828 }, 00:08:04.828 "claimed": false, 00:08:04.828 "zoned": false, 00:08:04.828 "supported_io_types": { 00:08:04.828 "read": true, 00:08:04.828 "write": true, 00:08:04.828 "unmap": true, 00:08:04.828 "flush": true, 00:08:04.828 "reset": true, 00:08:04.828 "nvme_admin": false, 00:08:04.828 "nvme_io": false, 00:08:04.828 "nvme_io_md": false, 00:08:04.828 "write_zeroes": true, 00:08:04.828 "zcopy": true, 00:08:04.828 "get_zone_info": false, 00:08:04.828 "zone_management": false, 00:08:04.828 "zone_append": false, 00:08:04.828 "compare": false, 00:08:04.828 "compare_and_write": false, 00:08:04.828 "abort": true, 00:08:04.828 "seek_hole": false, 00:08:04.828 "seek_data": false, 00:08:04.828 "copy": true, 00:08:04.828 "nvme_iov_md": false 00:08:04.828 }, 00:08:04.828 "memory_domains": [ 00:08:04.828 { 00:08:04.828 "dma_device_id": "system", 00:08:04.828 "dma_device_type": 1 00:08:04.828 }, 00:08:04.828 { 00:08:04.828 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:04.828 "dma_device_type": 2 00:08:04.828 } 00:08:04.828 ], 00:08:04.828 "driver_specific": {} 00:08:04.828 } 00:08:04.828 ] 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.828 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.829 BaseBdev3 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.829 12:51:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.829 [ 00:08:04.829 { 00:08:04.829 "name": "BaseBdev3", 00:08:04.829 "aliases": [ 00:08:04.829 "6ef7e69b-081e-485c-bdf3-3c91cc7beb67" 00:08:04.829 ], 00:08:04.829 "product_name": "Malloc disk", 00:08:04.829 "block_size": 512, 00:08:04.829 "num_blocks": 65536, 00:08:04.829 "uuid": "6ef7e69b-081e-485c-bdf3-3c91cc7beb67", 00:08:04.829 "assigned_rate_limits": { 00:08:04.829 "rw_ios_per_sec": 0, 00:08:04.829 "rw_mbytes_per_sec": 0, 00:08:04.829 "r_mbytes_per_sec": 0, 00:08:04.829 "w_mbytes_per_sec": 0 00:08:04.829 }, 00:08:04.829 "claimed": false, 00:08:04.829 "zoned": false, 00:08:04.829 "supported_io_types": { 00:08:04.829 "read": true, 00:08:04.829 "write": true, 00:08:04.829 "unmap": true, 00:08:04.829 "flush": true, 00:08:04.829 "reset": true, 00:08:04.829 "nvme_admin": false, 00:08:04.829 "nvme_io": false, 00:08:04.829 "nvme_io_md": false, 00:08:04.829 "write_zeroes": true, 00:08:04.829 "zcopy": true, 00:08:04.829 "get_zone_info": false, 00:08:04.829 "zone_management": false, 00:08:04.829 "zone_append": false, 00:08:04.829 "compare": false, 00:08:04.829 "compare_and_write": false, 00:08:04.829 "abort": true, 00:08:04.829 "seek_hole": false, 00:08:04.829 "seek_data": false, 00:08:04.829 "copy": true, 00:08:04.829 "nvme_iov_md": false 00:08:04.829 }, 00:08:04.829 "memory_domains": [ 00:08:04.829 { 00:08:04.829 "dma_device_id": "system", 00:08:04.829 "dma_device_type": 1 00:08:04.829 }, 00:08:04.829 { 00:08:04.829 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:04.829 "dma_device_type": 2 00:08:04.829 } 00:08:04.829 ], 00:08:04.829 "driver_specific": {} 00:08:04.829 } 00:08:04.829 ] 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.829 [2024-11-26 12:51:22.437937] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:04.829 [2024-11-26 12:51:22.438059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:04.829 [2024-11-26 12:51:22.438101] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:04.829 [2024-11-26 12:51:22.439945] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.829 
12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.829 "name": "Existed_Raid", 00:08:04.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.829 "strip_size_kb": 64, 00:08:04.829 "state": "configuring", 00:08:04.829 "raid_level": "raid0", 00:08:04.829 "superblock": false, 00:08:04.829 "num_base_bdevs": 3, 00:08:04.829 "num_base_bdevs_discovered": 2, 00:08:04.829 "num_base_bdevs_operational": 3, 00:08:04.829 "base_bdevs_list": [ 00:08:04.829 { 00:08:04.829 "name": "BaseBdev1", 00:08:04.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.829 "is_configured": false, 00:08:04.829 
"data_offset": 0, 00:08:04.829 "data_size": 0 00:08:04.829 }, 00:08:04.829 { 00:08:04.829 "name": "BaseBdev2", 00:08:04.829 "uuid": "16bea7a2-fb99-4888-86a9-2c3f6258e74c", 00:08:04.829 "is_configured": true, 00:08:04.829 "data_offset": 0, 00:08:04.829 "data_size": 65536 00:08:04.829 }, 00:08:04.829 { 00:08:04.829 "name": "BaseBdev3", 00:08:04.829 "uuid": "6ef7e69b-081e-485c-bdf3-3c91cc7beb67", 00:08:04.829 "is_configured": true, 00:08:04.829 "data_offset": 0, 00:08:04.829 "data_size": 65536 00:08:04.829 } 00:08:04.829 ] 00:08:04.829 }' 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.829 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.399 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:05.399 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.399 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.399 [2024-11-26 12:51:22.841270] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:05.399 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.399 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:05.399 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.399 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.399 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.399 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.399 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:05.399 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.399 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.399 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.399 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.399 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.399 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.399 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.399 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.399 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.399 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.399 "name": "Existed_Raid", 00:08:05.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.399 "strip_size_kb": 64, 00:08:05.399 "state": "configuring", 00:08:05.399 "raid_level": "raid0", 00:08:05.399 "superblock": false, 00:08:05.399 "num_base_bdevs": 3, 00:08:05.399 "num_base_bdevs_discovered": 1, 00:08:05.399 "num_base_bdevs_operational": 3, 00:08:05.399 "base_bdevs_list": [ 00:08:05.399 { 00:08:05.400 "name": "BaseBdev1", 00:08:05.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.400 "is_configured": false, 00:08:05.400 "data_offset": 0, 00:08:05.400 "data_size": 0 00:08:05.400 }, 00:08:05.400 { 00:08:05.400 "name": null, 00:08:05.400 "uuid": "16bea7a2-fb99-4888-86a9-2c3f6258e74c", 00:08:05.400 "is_configured": false, 00:08:05.400 "data_offset": 0, 00:08:05.400 "data_size": 65536 00:08:05.400 }, 00:08:05.400 { 
00:08:05.400 "name": "BaseBdev3", 00:08:05.400 "uuid": "6ef7e69b-081e-485c-bdf3-3c91cc7beb67", 00:08:05.400 "is_configured": true, 00:08:05.400 "data_offset": 0, 00:08:05.400 "data_size": 65536 00:08:05.400 } 00:08:05.400 ] 00:08:05.400 }' 00:08:05.400 12:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.400 12:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.660 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.660 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.660 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.660 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:05.660 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.660 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:05.660 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:05.660 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.660 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.660 BaseBdev1 00:08:05.660 [2024-11-26 12:51:23.335356] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:05.660 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:05.921 12:51:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.921 [ 00:08:05.921 { 00:08:05.921 "name": "BaseBdev1", 00:08:05.921 "aliases": [ 00:08:05.921 "42b80c48-c581-4b56-a938-77362a4e9c19" 00:08:05.921 ], 00:08:05.921 "product_name": "Malloc disk", 00:08:05.921 "block_size": 512, 00:08:05.921 "num_blocks": 65536, 00:08:05.921 "uuid": "42b80c48-c581-4b56-a938-77362a4e9c19", 00:08:05.921 "assigned_rate_limits": { 00:08:05.921 "rw_ios_per_sec": 0, 00:08:05.921 "rw_mbytes_per_sec": 0, 00:08:05.921 "r_mbytes_per_sec": 0, 00:08:05.921 "w_mbytes_per_sec": 0 00:08:05.921 }, 00:08:05.921 "claimed": true, 00:08:05.921 "claim_type": "exclusive_write", 00:08:05.921 "zoned": false, 00:08:05.921 "supported_io_types": { 00:08:05.921 "read": true, 00:08:05.921 "write": true, 00:08:05.921 "unmap": true, 00:08:05.921 "flush": true, 
00:08:05.921 "reset": true, 00:08:05.921 "nvme_admin": false, 00:08:05.921 "nvme_io": false, 00:08:05.921 "nvme_io_md": false, 00:08:05.921 "write_zeroes": true, 00:08:05.921 "zcopy": true, 00:08:05.921 "get_zone_info": false, 00:08:05.921 "zone_management": false, 00:08:05.921 "zone_append": false, 00:08:05.921 "compare": false, 00:08:05.921 "compare_and_write": false, 00:08:05.921 "abort": true, 00:08:05.921 "seek_hole": false, 00:08:05.921 "seek_data": false, 00:08:05.921 "copy": true, 00:08:05.921 "nvme_iov_md": false 00:08:05.921 }, 00:08:05.921 "memory_domains": [ 00:08:05.921 { 00:08:05.921 "dma_device_id": "system", 00:08:05.921 "dma_device_type": 1 00:08:05.921 }, 00:08:05.921 { 00:08:05.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.921 "dma_device_type": 2 00:08:05.921 } 00:08:05.921 ], 00:08:05.921 "driver_specific": {} 00:08:05.921 } 00:08:05.921 ] 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.921 "name": "Existed_Raid", 00:08:05.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.921 "strip_size_kb": 64, 00:08:05.921 "state": "configuring", 00:08:05.921 "raid_level": "raid0", 00:08:05.921 "superblock": false, 00:08:05.921 "num_base_bdevs": 3, 00:08:05.921 "num_base_bdevs_discovered": 2, 00:08:05.921 "num_base_bdevs_operational": 3, 00:08:05.921 "base_bdevs_list": [ 00:08:05.921 { 00:08:05.921 "name": "BaseBdev1", 00:08:05.921 "uuid": "42b80c48-c581-4b56-a938-77362a4e9c19", 00:08:05.921 "is_configured": true, 00:08:05.921 "data_offset": 0, 00:08:05.921 "data_size": 65536 00:08:05.921 }, 00:08:05.921 { 00:08:05.921 "name": null, 00:08:05.921 "uuid": "16bea7a2-fb99-4888-86a9-2c3f6258e74c", 00:08:05.921 "is_configured": false, 00:08:05.921 "data_offset": 0, 00:08:05.921 "data_size": 65536 00:08:05.921 }, 00:08:05.921 { 00:08:05.921 "name": "BaseBdev3", 00:08:05.921 "uuid": "6ef7e69b-081e-485c-bdf3-3c91cc7beb67", 00:08:05.921 "is_configured": true, 00:08:05.921 "data_offset": 0, 00:08:05.921 "data_size": 65536 
00:08:05.921 } 00:08:05.921 ] 00:08:05.921 }' 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.921 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.181 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.181 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.181 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.181 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:06.181 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.441 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:06.441 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:06.441 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.441 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.441 [2024-11-26 12:51:23.866737] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:06.441 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.441 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:06.441 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.441 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.441 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:06.441 
12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.441 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:06.441 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.441 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.441 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.441 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.441 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.441 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.441 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.441 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.441 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.441 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.441 "name": "Existed_Raid", 00:08:06.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.441 "strip_size_kb": 64, 00:08:06.441 "state": "configuring", 00:08:06.441 "raid_level": "raid0", 00:08:06.441 "superblock": false, 00:08:06.441 "num_base_bdevs": 3, 00:08:06.441 "num_base_bdevs_discovered": 1, 00:08:06.441 "num_base_bdevs_operational": 3, 00:08:06.441 "base_bdevs_list": [ 00:08:06.441 { 00:08:06.441 "name": "BaseBdev1", 00:08:06.441 "uuid": "42b80c48-c581-4b56-a938-77362a4e9c19", 00:08:06.441 "is_configured": true, 00:08:06.441 "data_offset": 0, 00:08:06.441 "data_size": 65536 00:08:06.441 }, 00:08:06.441 { 00:08:06.441 "name": null, 
00:08:06.441 "uuid": "16bea7a2-fb99-4888-86a9-2c3f6258e74c", 00:08:06.441 "is_configured": false, 00:08:06.441 "data_offset": 0, 00:08:06.441 "data_size": 65536 00:08:06.441 }, 00:08:06.442 { 00:08:06.442 "name": null, 00:08:06.442 "uuid": "6ef7e69b-081e-485c-bdf3-3c91cc7beb67", 00:08:06.442 "is_configured": false, 00:08:06.442 "data_offset": 0, 00:08:06.442 "data_size": 65536 00:08:06.442 } 00:08:06.442 ] 00:08:06.442 }' 00:08:06.442 12:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.442 12:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.702 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.702 12:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.702 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:06.702 12:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.702 12:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.702 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:06.702 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:06.702 12:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.702 12:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.702 [2024-11-26 12:51:24.341958] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:06.702 12:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.702 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:06.702 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.702 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.702 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:06.702 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.702 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:06.702 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.702 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.702 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.702 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.702 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.702 12:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.702 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.702 12:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.702 12:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.961 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.961 "name": "Existed_Raid", 00:08:06.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.961 "strip_size_kb": 64, 00:08:06.961 "state": "configuring", 00:08:06.961 "raid_level": "raid0", 00:08:06.961 "superblock": false, 00:08:06.961 
"num_base_bdevs": 3, 00:08:06.961 "num_base_bdevs_discovered": 2, 00:08:06.961 "num_base_bdevs_operational": 3, 00:08:06.961 "base_bdevs_list": [ 00:08:06.961 { 00:08:06.961 "name": "BaseBdev1", 00:08:06.961 "uuid": "42b80c48-c581-4b56-a938-77362a4e9c19", 00:08:06.961 "is_configured": true, 00:08:06.961 "data_offset": 0, 00:08:06.961 "data_size": 65536 00:08:06.961 }, 00:08:06.961 { 00:08:06.961 "name": null, 00:08:06.961 "uuid": "16bea7a2-fb99-4888-86a9-2c3f6258e74c", 00:08:06.961 "is_configured": false, 00:08:06.961 "data_offset": 0, 00:08:06.961 "data_size": 65536 00:08:06.961 }, 00:08:06.961 { 00:08:06.961 "name": "BaseBdev3", 00:08:06.961 "uuid": "6ef7e69b-081e-485c-bdf3-3c91cc7beb67", 00:08:06.961 "is_configured": true, 00:08:06.961 "data_offset": 0, 00:08:06.961 "data_size": 65536 00:08:06.961 } 00:08:06.961 ] 00:08:06.961 }' 00:08:06.961 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.961 12:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.221 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.221 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:07.221 12:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.221 12:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.221 12:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.221 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:07.221 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:07.221 12:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.221 12:51:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.221 [2024-11-26 12:51:24.761257] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:07.221 12:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.221 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:07.221 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.221 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:07.221 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.221 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.221 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:07.221 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.221 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.221 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.221 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.221 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.221 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.221 12:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.221 12:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.221 12:51:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.221 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.221 "name": "Existed_Raid", 00:08:07.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.221 "strip_size_kb": 64, 00:08:07.221 "state": "configuring", 00:08:07.221 "raid_level": "raid0", 00:08:07.221 "superblock": false, 00:08:07.221 "num_base_bdevs": 3, 00:08:07.221 "num_base_bdevs_discovered": 1, 00:08:07.221 "num_base_bdevs_operational": 3, 00:08:07.221 "base_bdevs_list": [ 00:08:07.221 { 00:08:07.221 "name": null, 00:08:07.221 "uuid": "42b80c48-c581-4b56-a938-77362a4e9c19", 00:08:07.221 "is_configured": false, 00:08:07.221 "data_offset": 0, 00:08:07.221 "data_size": 65536 00:08:07.221 }, 00:08:07.221 { 00:08:07.221 "name": null, 00:08:07.221 "uuid": "16bea7a2-fb99-4888-86a9-2c3f6258e74c", 00:08:07.221 "is_configured": false, 00:08:07.221 "data_offset": 0, 00:08:07.221 "data_size": 65536 00:08:07.221 }, 00:08:07.221 { 00:08:07.221 "name": "BaseBdev3", 00:08:07.221 "uuid": "6ef7e69b-081e-485c-bdf3-3c91cc7beb67", 00:08:07.221 "is_configured": true, 00:08:07.221 "data_offset": 0, 00:08:07.221 "data_size": 65536 00:08:07.221 } 00:08:07.221 ] 00:08:07.221 }' 00:08:07.221 12:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.221 12:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.791 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.791 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:07.791 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.791 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.791 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:08:07.791 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:07.791 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:07.791 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.791 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.791 [2024-11-26 12:51:25.262968] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:07.791 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.791 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:07.791 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.791 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:07.791 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.791 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.791 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:07.791 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.791 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.791 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.791 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.791 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:07.791 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.791 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.791 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.791 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.791 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.791 "name": "Existed_Raid", 00:08:07.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.791 "strip_size_kb": 64, 00:08:07.791 "state": "configuring", 00:08:07.791 "raid_level": "raid0", 00:08:07.791 "superblock": false, 00:08:07.791 "num_base_bdevs": 3, 00:08:07.791 "num_base_bdevs_discovered": 2, 00:08:07.791 "num_base_bdevs_operational": 3, 00:08:07.791 "base_bdevs_list": [ 00:08:07.791 { 00:08:07.791 "name": null, 00:08:07.791 "uuid": "42b80c48-c581-4b56-a938-77362a4e9c19", 00:08:07.791 "is_configured": false, 00:08:07.791 "data_offset": 0, 00:08:07.791 "data_size": 65536 00:08:07.791 }, 00:08:07.791 { 00:08:07.791 "name": "BaseBdev2", 00:08:07.791 "uuid": "16bea7a2-fb99-4888-86a9-2c3f6258e74c", 00:08:07.791 "is_configured": true, 00:08:07.791 "data_offset": 0, 00:08:07.791 "data_size": 65536 00:08:07.791 }, 00:08:07.791 { 00:08:07.791 "name": "BaseBdev3", 00:08:07.791 "uuid": "6ef7e69b-081e-485c-bdf3-3c91cc7beb67", 00:08:07.791 "is_configured": true, 00:08:07.791 "data_offset": 0, 00:08:07.791 "data_size": 65536 00:08:07.791 } 00:08:07.791 ] 00:08:07.791 }' 00:08:07.791 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.791 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.051 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.051 12:51:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.051 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.051 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:08.051 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.051 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:08.051 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.051 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:08.051 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.051 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.312 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.312 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 42b80c48-c581-4b56-a938-77362a4e9c19 00:08:08.312 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.312 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.312 [2024-11-26 12:51:25.769045] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:08.312 [2024-11-26 12:51:25.769154] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:08.312 [2024-11-26 12:51:25.769181] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:08.312 [2024-11-26 12:51:25.769492] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 
00:08:08.312 [2024-11-26 12:51:25.769649] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:08.312 [2024-11-26 12:51:25.769689] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:08.312 [2024-11-26 12:51:25.769897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:08.312 NewBaseBdev 00:08:08.312 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.312 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:08.312 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:08.312 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:08.312 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:08.312 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:08.312 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:08.312 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:08.312 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.312 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.312 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.312 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:08.312 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.312 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:08.312 [ 00:08:08.312 { 00:08:08.312 "name": "NewBaseBdev", 00:08:08.312 "aliases": [ 00:08:08.312 "42b80c48-c581-4b56-a938-77362a4e9c19" 00:08:08.312 ], 00:08:08.312 "product_name": "Malloc disk", 00:08:08.312 "block_size": 512, 00:08:08.312 "num_blocks": 65536, 00:08:08.312 "uuid": "42b80c48-c581-4b56-a938-77362a4e9c19", 00:08:08.312 "assigned_rate_limits": { 00:08:08.312 "rw_ios_per_sec": 0, 00:08:08.312 "rw_mbytes_per_sec": 0, 00:08:08.312 "r_mbytes_per_sec": 0, 00:08:08.312 "w_mbytes_per_sec": 0 00:08:08.312 }, 00:08:08.312 "claimed": true, 00:08:08.312 "claim_type": "exclusive_write", 00:08:08.312 "zoned": false, 00:08:08.312 "supported_io_types": { 00:08:08.312 "read": true, 00:08:08.312 "write": true, 00:08:08.312 "unmap": true, 00:08:08.312 "flush": true, 00:08:08.312 "reset": true, 00:08:08.312 "nvme_admin": false, 00:08:08.312 "nvme_io": false, 00:08:08.312 "nvme_io_md": false, 00:08:08.312 "write_zeroes": true, 00:08:08.312 "zcopy": true, 00:08:08.312 "get_zone_info": false, 00:08:08.312 "zone_management": false, 00:08:08.312 "zone_append": false, 00:08:08.312 "compare": false, 00:08:08.312 "compare_and_write": false, 00:08:08.312 "abort": true, 00:08:08.312 "seek_hole": false, 00:08:08.312 "seek_data": false, 00:08:08.312 "copy": true, 00:08:08.312 "nvme_iov_md": false 00:08:08.312 }, 00:08:08.312 "memory_domains": [ 00:08:08.312 { 00:08:08.312 "dma_device_id": "system", 00:08:08.312 "dma_device_type": 1 00:08:08.312 }, 00:08:08.312 { 00:08:08.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.312 "dma_device_type": 2 00:08:08.312 } 00:08:08.312 ], 00:08:08.312 "driver_specific": {} 00:08:08.312 } 00:08:08.312 ] 00:08:08.312 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.312 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:08.312 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:08.312 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.312 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:08.312 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.312 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.312 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:08.312 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.312 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.313 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.313 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.313 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.313 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.313 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.313 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.313 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.313 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.313 "name": "Existed_Raid", 00:08:08.313 "uuid": "a7849a0d-074e-438f-9140-3cf9bc887810", 00:08:08.313 "strip_size_kb": 64, 00:08:08.313 "state": "online", 00:08:08.313 "raid_level": "raid0", 00:08:08.313 "superblock": false, 00:08:08.313 "num_base_bdevs": 3, 00:08:08.313 
"num_base_bdevs_discovered": 3, 00:08:08.313 "num_base_bdevs_operational": 3, 00:08:08.313 "base_bdevs_list": [ 00:08:08.313 { 00:08:08.313 "name": "NewBaseBdev", 00:08:08.313 "uuid": "42b80c48-c581-4b56-a938-77362a4e9c19", 00:08:08.313 "is_configured": true, 00:08:08.313 "data_offset": 0, 00:08:08.313 "data_size": 65536 00:08:08.313 }, 00:08:08.313 { 00:08:08.313 "name": "BaseBdev2", 00:08:08.313 "uuid": "16bea7a2-fb99-4888-86a9-2c3f6258e74c", 00:08:08.313 "is_configured": true, 00:08:08.313 "data_offset": 0, 00:08:08.313 "data_size": 65536 00:08:08.313 }, 00:08:08.313 { 00:08:08.313 "name": "BaseBdev3", 00:08:08.313 "uuid": "6ef7e69b-081e-485c-bdf3-3c91cc7beb67", 00:08:08.313 "is_configured": true, 00:08:08.313 "data_offset": 0, 00:08:08.313 "data_size": 65536 00:08:08.313 } 00:08:08.313 ] 00:08:08.313 }' 00:08:08.313 12:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.313 12:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.572 12:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:08.572 12:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:08.572 12:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:08.572 12:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:08.572 12:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:08.572 12:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:08.572 12:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:08.572 12:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.572 12:51:26 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:08.572 12:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:08.572 [2024-11-26 12:51:26.228543] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:08.572 12:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.830 12:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:08.830 "name": "Existed_Raid", 00:08:08.830 "aliases": [ 00:08:08.831 "a7849a0d-074e-438f-9140-3cf9bc887810" 00:08:08.831 ], 00:08:08.831 "product_name": "Raid Volume", 00:08:08.831 "block_size": 512, 00:08:08.831 "num_blocks": 196608, 00:08:08.831 "uuid": "a7849a0d-074e-438f-9140-3cf9bc887810", 00:08:08.831 "assigned_rate_limits": { 00:08:08.831 "rw_ios_per_sec": 0, 00:08:08.831 "rw_mbytes_per_sec": 0, 00:08:08.831 "r_mbytes_per_sec": 0, 00:08:08.831 "w_mbytes_per_sec": 0 00:08:08.831 }, 00:08:08.831 "claimed": false, 00:08:08.831 "zoned": false, 00:08:08.831 "supported_io_types": { 00:08:08.831 "read": true, 00:08:08.831 "write": true, 00:08:08.831 "unmap": true, 00:08:08.831 "flush": true, 00:08:08.831 "reset": true, 00:08:08.831 "nvme_admin": false, 00:08:08.831 "nvme_io": false, 00:08:08.831 "nvme_io_md": false, 00:08:08.831 "write_zeroes": true, 00:08:08.831 "zcopy": false, 00:08:08.831 "get_zone_info": false, 00:08:08.831 "zone_management": false, 00:08:08.831 "zone_append": false, 00:08:08.831 "compare": false, 00:08:08.831 "compare_and_write": false, 00:08:08.831 "abort": false, 00:08:08.831 "seek_hole": false, 00:08:08.831 "seek_data": false, 00:08:08.831 "copy": false, 00:08:08.831 "nvme_iov_md": false 00:08:08.831 }, 00:08:08.831 "memory_domains": [ 00:08:08.831 { 00:08:08.831 "dma_device_id": "system", 00:08:08.831 "dma_device_type": 1 00:08:08.831 }, 00:08:08.831 { 00:08:08.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.831 "dma_device_type": 2 00:08:08.831 }, 00:08:08.831 
{ 00:08:08.831 "dma_device_id": "system", 00:08:08.831 "dma_device_type": 1 00:08:08.831 }, 00:08:08.831 { 00:08:08.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.831 "dma_device_type": 2 00:08:08.831 }, 00:08:08.831 { 00:08:08.831 "dma_device_id": "system", 00:08:08.831 "dma_device_type": 1 00:08:08.831 }, 00:08:08.831 { 00:08:08.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.831 "dma_device_type": 2 00:08:08.831 } 00:08:08.831 ], 00:08:08.831 "driver_specific": { 00:08:08.831 "raid": { 00:08:08.831 "uuid": "a7849a0d-074e-438f-9140-3cf9bc887810", 00:08:08.831 "strip_size_kb": 64, 00:08:08.831 "state": "online", 00:08:08.831 "raid_level": "raid0", 00:08:08.831 "superblock": false, 00:08:08.831 "num_base_bdevs": 3, 00:08:08.831 "num_base_bdevs_discovered": 3, 00:08:08.831 "num_base_bdevs_operational": 3, 00:08:08.831 "base_bdevs_list": [ 00:08:08.831 { 00:08:08.831 "name": "NewBaseBdev", 00:08:08.831 "uuid": "42b80c48-c581-4b56-a938-77362a4e9c19", 00:08:08.831 "is_configured": true, 00:08:08.831 "data_offset": 0, 00:08:08.831 "data_size": 65536 00:08:08.831 }, 00:08:08.831 { 00:08:08.831 "name": "BaseBdev2", 00:08:08.831 "uuid": "16bea7a2-fb99-4888-86a9-2c3f6258e74c", 00:08:08.831 "is_configured": true, 00:08:08.831 "data_offset": 0, 00:08:08.831 "data_size": 65536 00:08:08.831 }, 00:08:08.831 { 00:08:08.831 "name": "BaseBdev3", 00:08:08.831 "uuid": "6ef7e69b-081e-485c-bdf3-3c91cc7beb67", 00:08:08.831 "is_configured": true, 00:08:08.831 "data_offset": 0, 00:08:08.831 "data_size": 65536 00:08:08.831 } 00:08:08.831 ] 00:08:08.831 } 00:08:08.831 } 00:08:08.831 }' 00:08:08.831 12:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:08.831 12:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:08.831 BaseBdev2 00:08:08.831 BaseBdev3' 00:08:08.831 12:51:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.831 12:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:08.831 12:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.831 12:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:08.831 12:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.831 12:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.831 12:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.831 12:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.831 12:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.831 12:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.831 12:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.831 12:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.831 12:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:08.831 12:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.831 12:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.831 12:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.831 12:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.831 
12:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.831 12:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.831 12:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.831 12:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:08.831 12:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.831 12:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.831 12:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.831 12:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.831 12:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.831 12:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:08.831 12:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.831 12:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.832 [2024-11-26 12:51:26.487840] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:08.832 [2024-11-26 12:51:26.487909] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:08.832 [2024-11-26 12:51:26.487987] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:08.832 [2024-11-26 12:51:26.488049] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:08.832 [2024-11-26 12:51:26.488095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:08.832 12:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.832 12:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 75266 00:08:08.832 12:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 75266 ']' 00:08:08.832 12:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 75266 00:08:08.832 12:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:08.832 12:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:08.832 12:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75266 00:08:09.097 killing process with pid 75266 00:08:09.097 12:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:09.097 12:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:09.097 12:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75266' 00:08:09.097 12:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 75266 00:08:09.097 [2024-11-26 12:51:26.527248] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:09.097 12:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 75266 00:08:09.097 [2024-11-26 12:51:26.557875] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:09.367 ************************************ 00:08:09.367 END TEST raid_state_function_test 00:08:09.367 ************************************ 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:09.367 00:08:09.367 real 0m8.473s 00:08:09.367 user 0m14.363s 
00:08:09.367 sys 0m1.707s 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.367 12:51:26 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:09.367 12:51:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:09.367 12:51:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.367 12:51:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:09.367 ************************************ 00:08:09.367 START TEST raid_state_function_test_sb 00:08:09.367 ************************************ 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev2 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75871 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:09.367 Process raid pid: 75871 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75871' 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75871 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 75871 ']' 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:09.367 12:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.367 [2024-11-26 12:51:26.969513] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:09.367 [2024-11-26 12:51:26.969685] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.627 [2024-11-26 12:51:27.127600] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.627 [2024-11-26 12:51:27.173384] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.627 [2024-11-26 12:51:27.216507] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.627 [2024-11-26 12:51:27.216623] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.195 12:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.195 12:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:10.195 12:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:10.195 12:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.195 12:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.195 [2024-11-26 12:51:27.814451] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:10.195 [2024-11-26 12:51:27.814542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:10.195 [2024-11-26 12:51:27.814602] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:10.195 [2024-11-26 12:51:27.814638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:10.195 [2024-11-26 12:51:27.814678] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:10.195 [2024-11-26 12:51:27.814727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:10.195 12:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.195 12:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:10.195 12:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.195 12:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.195 12:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.195 12:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.195 12:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.195 12:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.195 12:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.195 12:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.195 12:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.195 12:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.195 12:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.195 12:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.195 12:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.195 12:51:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.195 12:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.195 "name": "Existed_Raid", 00:08:10.195 "uuid": "65e289a3-8c4d-4d70-aece-e06289fa8fa7", 00:08:10.195 "strip_size_kb": 64, 00:08:10.195 "state": "configuring", 00:08:10.195 "raid_level": "raid0", 00:08:10.195 "superblock": true, 00:08:10.195 "num_base_bdevs": 3, 00:08:10.195 "num_base_bdevs_discovered": 0, 00:08:10.195 "num_base_bdevs_operational": 3, 00:08:10.195 "base_bdevs_list": [ 00:08:10.195 { 00:08:10.195 "name": "BaseBdev1", 00:08:10.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.195 "is_configured": false, 00:08:10.195 "data_offset": 0, 00:08:10.195 "data_size": 0 00:08:10.195 }, 00:08:10.195 { 00:08:10.195 "name": "BaseBdev2", 00:08:10.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.195 "is_configured": false, 00:08:10.195 "data_offset": 0, 00:08:10.195 "data_size": 0 00:08:10.195 }, 00:08:10.195 { 00:08:10.195 "name": "BaseBdev3", 00:08:10.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.195 "is_configured": false, 00:08:10.195 "data_offset": 0, 00:08:10.195 "data_size": 0 00:08:10.195 } 00:08:10.195 ] 00:08:10.195 }' 00:08:10.195 12:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.195 12:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.765 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:10.765 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.765 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.765 [2024-11-26 12:51:28.233687] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:10.765 [2024-11-26 12:51:28.233779] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:10.765 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.765 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:10.765 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.765 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.765 [2024-11-26 12:51:28.241712] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:10.765 [2024-11-26 12:51:28.241754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:10.765 [2024-11-26 12:51:28.241765] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:10.765 [2024-11-26 12:51:28.241778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:10.765 [2024-11-26 12:51:28.241785] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:10.765 [2024-11-26 12:51:28.241797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:10.765 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.765 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:10.765 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.765 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.765 [2024-11-26 12:51:28.258543] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.765 BaseBdev1 
00:08:10.765 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.765 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:10.765 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:10.765 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:10.765 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:10.765 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:10.765 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:10.765 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:10.765 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.765 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.765 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.765 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:10.765 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.765 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.765 [ 00:08:10.765 { 00:08:10.765 "name": "BaseBdev1", 00:08:10.765 "aliases": [ 00:08:10.765 "67d5d83e-ffb5-4838-bb12-55aafa7c3361" 00:08:10.765 ], 00:08:10.765 "product_name": "Malloc disk", 00:08:10.765 "block_size": 512, 00:08:10.765 "num_blocks": 65536, 00:08:10.765 "uuid": "67d5d83e-ffb5-4838-bb12-55aafa7c3361", 00:08:10.765 "assigned_rate_limits": { 00:08:10.765 
"rw_ios_per_sec": 0, 00:08:10.765 "rw_mbytes_per_sec": 0, 00:08:10.765 "r_mbytes_per_sec": 0, 00:08:10.766 "w_mbytes_per_sec": 0 00:08:10.766 }, 00:08:10.766 "claimed": true, 00:08:10.766 "claim_type": "exclusive_write", 00:08:10.766 "zoned": false, 00:08:10.766 "supported_io_types": { 00:08:10.766 "read": true, 00:08:10.766 "write": true, 00:08:10.766 "unmap": true, 00:08:10.766 "flush": true, 00:08:10.766 "reset": true, 00:08:10.766 "nvme_admin": false, 00:08:10.766 "nvme_io": false, 00:08:10.766 "nvme_io_md": false, 00:08:10.766 "write_zeroes": true, 00:08:10.766 "zcopy": true, 00:08:10.766 "get_zone_info": false, 00:08:10.766 "zone_management": false, 00:08:10.766 "zone_append": false, 00:08:10.766 "compare": false, 00:08:10.766 "compare_and_write": false, 00:08:10.766 "abort": true, 00:08:10.766 "seek_hole": false, 00:08:10.766 "seek_data": false, 00:08:10.766 "copy": true, 00:08:10.766 "nvme_iov_md": false 00:08:10.766 }, 00:08:10.766 "memory_domains": [ 00:08:10.766 { 00:08:10.766 "dma_device_id": "system", 00:08:10.766 "dma_device_type": 1 00:08:10.766 }, 00:08:10.766 { 00:08:10.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.766 "dma_device_type": 2 00:08:10.766 } 00:08:10.766 ], 00:08:10.766 "driver_specific": {} 00:08:10.766 } 00:08:10.766 ] 00:08:10.766 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.766 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:10.766 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:10.766 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.766 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.766 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:10.766 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.766 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.766 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.766 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.766 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.766 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.766 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.766 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.766 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.766 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.766 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.766 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.766 "name": "Existed_Raid", 00:08:10.766 "uuid": "a0b73e37-1be4-4f0a-af81-ad2e7c2cb855", 00:08:10.766 "strip_size_kb": 64, 00:08:10.766 "state": "configuring", 00:08:10.766 "raid_level": "raid0", 00:08:10.766 "superblock": true, 00:08:10.766 "num_base_bdevs": 3, 00:08:10.766 "num_base_bdevs_discovered": 1, 00:08:10.766 "num_base_bdevs_operational": 3, 00:08:10.766 "base_bdevs_list": [ 00:08:10.766 { 00:08:10.766 "name": "BaseBdev1", 00:08:10.766 "uuid": "67d5d83e-ffb5-4838-bb12-55aafa7c3361", 00:08:10.766 "is_configured": true, 00:08:10.766 "data_offset": 2048, 00:08:10.766 "data_size": 63488 
00:08:10.766 }, 00:08:10.766 { 00:08:10.766 "name": "BaseBdev2", 00:08:10.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.766 "is_configured": false, 00:08:10.766 "data_offset": 0, 00:08:10.766 "data_size": 0 00:08:10.766 }, 00:08:10.766 { 00:08:10.766 "name": "BaseBdev3", 00:08:10.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.766 "is_configured": false, 00:08:10.766 "data_offset": 0, 00:08:10.766 "data_size": 0 00:08:10.766 } 00:08:10.766 ] 00:08:10.766 }' 00:08:10.766 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.766 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.336 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:11.336 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.336 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.336 [2024-11-26 12:51:28.713805] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:11.336 [2024-11-26 12:51:28.713857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:11.336 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.336 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:11.336 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.336 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.336 [2024-11-26 12:51:28.725830] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:11.336 [2024-11-26 
12:51:28.727728] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:11.336 [2024-11-26 12:51:28.727772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:11.336 [2024-11-26 12:51:28.727784] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:11.336 [2024-11-26 12:51:28.727798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:11.336 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.336 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:11.336 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:11.336 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:11.336 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.336 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.336 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.336 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.336 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.336 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.336 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.336 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.336 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:11.336 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.336 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.336 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.336 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.336 12:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.336 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.336 "name": "Existed_Raid", 00:08:11.336 "uuid": "2b734705-b3c2-436e-b899-760109040f5a", 00:08:11.336 "strip_size_kb": 64, 00:08:11.336 "state": "configuring", 00:08:11.336 "raid_level": "raid0", 00:08:11.336 "superblock": true, 00:08:11.336 "num_base_bdevs": 3, 00:08:11.336 "num_base_bdevs_discovered": 1, 00:08:11.336 "num_base_bdevs_operational": 3, 00:08:11.336 "base_bdevs_list": [ 00:08:11.336 { 00:08:11.336 "name": "BaseBdev1", 00:08:11.336 "uuid": "67d5d83e-ffb5-4838-bb12-55aafa7c3361", 00:08:11.336 "is_configured": true, 00:08:11.336 "data_offset": 2048, 00:08:11.336 "data_size": 63488 00:08:11.336 }, 00:08:11.336 { 00:08:11.336 "name": "BaseBdev2", 00:08:11.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.336 "is_configured": false, 00:08:11.336 "data_offset": 0, 00:08:11.336 "data_size": 0 00:08:11.336 }, 00:08:11.336 { 00:08:11.336 "name": "BaseBdev3", 00:08:11.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.336 "is_configured": false, 00:08:11.336 "data_offset": 0, 00:08:11.336 "data_size": 0 00:08:11.336 } 00:08:11.336 ] 00:08:11.336 }' 00:08:11.336 12:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.336 12:51:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.596 [2024-11-26 12:51:29.187776] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:11.596 BaseBdev2 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.596 [ 00:08:11.596 { 00:08:11.596 "name": "BaseBdev2", 00:08:11.596 "aliases": [ 00:08:11.596 "42b9c63b-95cc-45c2-ad0a-fe9447883787" 00:08:11.596 ], 00:08:11.596 "product_name": "Malloc disk", 00:08:11.596 "block_size": 512, 00:08:11.596 "num_blocks": 65536, 00:08:11.596 "uuid": "42b9c63b-95cc-45c2-ad0a-fe9447883787", 00:08:11.596 "assigned_rate_limits": { 00:08:11.596 "rw_ios_per_sec": 0, 00:08:11.596 "rw_mbytes_per_sec": 0, 00:08:11.596 "r_mbytes_per_sec": 0, 00:08:11.596 "w_mbytes_per_sec": 0 00:08:11.596 }, 00:08:11.596 "claimed": true, 00:08:11.596 "claim_type": "exclusive_write", 00:08:11.596 "zoned": false, 00:08:11.596 "supported_io_types": { 00:08:11.596 "read": true, 00:08:11.596 "write": true, 00:08:11.596 "unmap": true, 00:08:11.596 "flush": true, 00:08:11.596 "reset": true, 00:08:11.596 "nvme_admin": false, 00:08:11.596 "nvme_io": false, 00:08:11.596 "nvme_io_md": false, 00:08:11.596 "write_zeroes": true, 00:08:11.596 "zcopy": true, 00:08:11.596 "get_zone_info": false, 00:08:11.596 "zone_management": false, 00:08:11.596 "zone_append": false, 00:08:11.596 "compare": false, 00:08:11.596 "compare_and_write": false, 00:08:11.596 "abort": true, 00:08:11.596 "seek_hole": false, 00:08:11.596 "seek_data": false, 00:08:11.596 "copy": true, 00:08:11.596 "nvme_iov_md": false 00:08:11.596 }, 00:08:11.596 "memory_domains": [ 00:08:11.596 { 00:08:11.596 "dma_device_id": "system", 00:08:11.596 "dma_device_type": 1 00:08:11.596 }, 00:08:11.596 { 00:08:11.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.596 "dma_device_type": 2 00:08:11.596 } 00:08:11.596 ], 00:08:11.596 "driver_specific": {} 00:08:11.596 } 00:08:11.596 ] 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.596 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.596 "name": "Existed_Raid", 00:08:11.596 "uuid": "2b734705-b3c2-436e-b899-760109040f5a", 00:08:11.596 "strip_size_kb": 64, 00:08:11.596 "state": "configuring", 00:08:11.596 "raid_level": "raid0", 00:08:11.596 "superblock": true, 00:08:11.596 "num_base_bdevs": 3, 00:08:11.596 "num_base_bdevs_discovered": 2, 00:08:11.596 "num_base_bdevs_operational": 3, 00:08:11.596 "base_bdevs_list": [ 00:08:11.596 { 00:08:11.596 "name": "BaseBdev1", 00:08:11.596 "uuid": "67d5d83e-ffb5-4838-bb12-55aafa7c3361", 00:08:11.596 "is_configured": true, 00:08:11.596 "data_offset": 2048, 00:08:11.596 "data_size": 63488 00:08:11.596 }, 00:08:11.596 { 00:08:11.596 "name": "BaseBdev2", 00:08:11.596 "uuid": "42b9c63b-95cc-45c2-ad0a-fe9447883787", 00:08:11.596 "is_configured": true, 00:08:11.596 "data_offset": 2048, 00:08:11.597 "data_size": 63488 00:08:11.597 }, 00:08:11.597 { 00:08:11.597 "name": "BaseBdev3", 00:08:11.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.597 "is_configured": false, 00:08:11.597 "data_offset": 0, 00:08:11.597 "data_size": 0 00:08:11.597 } 00:08:11.597 ] 00:08:11.597 }' 00:08:11.597 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.597 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.165 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:12.165 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.165 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.165 [2024-11-26 12:51:29.642059] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:12.165 [2024-11-26 12:51:29.642381] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:12.165 [2024-11-26 12:51:29.642446] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:12.165 [2024-11-26 12:51:29.642787] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:12.165 BaseBdev3 00:08:12.165 [2024-11-26 12:51:29.642990] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:12.165 [2024-11-26 12:51:29.643038] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:12.165 [2024-11-26 12:51:29.643268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.166 [ 00:08:12.166 { 00:08:12.166 "name": "BaseBdev3", 00:08:12.166 "aliases": [ 00:08:12.166 "d6e417f7-0286-48a6-943d-b052dba58315" 00:08:12.166 ], 00:08:12.166 "product_name": "Malloc disk", 00:08:12.166 "block_size": 512, 00:08:12.166 "num_blocks": 65536, 00:08:12.166 "uuid": "d6e417f7-0286-48a6-943d-b052dba58315", 00:08:12.166 "assigned_rate_limits": { 00:08:12.166 "rw_ios_per_sec": 0, 00:08:12.166 "rw_mbytes_per_sec": 0, 00:08:12.166 "r_mbytes_per_sec": 0, 00:08:12.166 "w_mbytes_per_sec": 0 00:08:12.166 }, 00:08:12.166 "claimed": true, 00:08:12.166 "claim_type": "exclusive_write", 00:08:12.166 "zoned": false, 00:08:12.166 "supported_io_types": { 00:08:12.166 "read": true, 00:08:12.166 "write": true, 00:08:12.166 "unmap": true, 00:08:12.166 "flush": true, 00:08:12.166 "reset": true, 00:08:12.166 "nvme_admin": false, 00:08:12.166 "nvme_io": false, 00:08:12.166 "nvme_io_md": false, 00:08:12.166 "write_zeroes": true, 00:08:12.166 "zcopy": true, 00:08:12.166 "get_zone_info": false, 00:08:12.166 "zone_management": false, 00:08:12.166 "zone_append": false, 00:08:12.166 "compare": false, 00:08:12.166 "compare_and_write": false, 00:08:12.166 "abort": true, 00:08:12.166 "seek_hole": false, 00:08:12.166 "seek_data": false, 00:08:12.166 "copy": true, 00:08:12.166 "nvme_iov_md": false 00:08:12.166 }, 00:08:12.166 "memory_domains": [ 00:08:12.166 { 00:08:12.166 "dma_device_id": "system", 00:08:12.166 "dma_device_type": 1 00:08:12.166 }, 00:08:12.166 { 00:08:12.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.166 "dma_device_type": 2 00:08:12.166 } 00:08:12.166 ], 00:08:12.166 "driver_specific": 
{} 00:08:12.166 } 00:08:12.166 ] 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.166 "name": "Existed_Raid", 00:08:12.166 "uuid": "2b734705-b3c2-436e-b899-760109040f5a", 00:08:12.166 "strip_size_kb": 64, 00:08:12.166 "state": "online", 00:08:12.166 "raid_level": "raid0", 00:08:12.166 "superblock": true, 00:08:12.166 "num_base_bdevs": 3, 00:08:12.166 "num_base_bdevs_discovered": 3, 00:08:12.166 "num_base_bdevs_operational": 3, 00:08:12.166 "base_bdevs_list": [ 00:08:12.166 { 00:08:12.166 "name": "BaseBdev1", 00:08:12.166 "uuid": "67d5d83e-ffb5-4838-bb12-55aafa7c3361", 00:08:12.166 "is_configured": true, 00:08:12.166 "data_offset": 2048, 00:08:12.166 "data_size": 63488 00:08:12.166 }, 00:08:12.166 { 00:08:12.166 "name": "BaseBdev2", 00:08:12.166 "uuid": "42b9c63b-95cc-45c2-ad0a-fe9447883787", 00:08:12.166 "is_configured": true, 00:08:12.166 "data_offset": 2048, 00:08:12.166 "data_size": 63488 00:08:12.166 }, 00:08:12.166 { 00:08:12.166 "name": "BaseBdev3", 00:08:12.166 "uuid": "d6e417f7-0286-48a6-943d-b052dba58315", 00:08:12.166 "is_configured": true, 00:08:12.166 "data_offset": 2048, 00:08:12.166 "data_size": 63488 00:08:12.166 } 00:08:12.166 ] 00:08:12.166 }' 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.166 12:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.426 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:12.426 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:12.426 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:12.426 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:12.426 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:12.426 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:12.426 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:12.426 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:12.426 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.426 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.426 [2024-11-26 12:51:30.029677] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:12.426 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.426 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:12.426 "name": "Existed_Raid", 00:08:12.426 "aliases": [ 00:08:12.426 "2b734705-b3c2-436e-b899-760109040f5a" 00:08:12.426 ], 00:08:12.426 "product_name": "Raid Volume", 00:08:12.426 "block_size": 512, 00:08:12.426 "num_blocks": 190464, 00:08:12.426 "uuid": "2b734705-b3c2-436e-b899-760109040f5a", 00:08:12.426 "assigned_rate_limits": { 00:08:12.426 "rw_ios_per_sec": 0, 00:08:12.426 "rw_mbytes_per_sec": 0, 00:08:12.426 "r_mbytes_per_sec": 0, 00:08:12.426 "w_mbytes_per_sec": 0 00:08:12.426 }, 00:08:12.426 "claimed": false, 00:08:12.426 "zoned": false, 00:08:12.426 "supported_io_types": { 00:08:12.426 "read": true, 00:08:12.426 "write": true, 00:08:12.426 "unmap": true, 00:08:12.426 "flush": true, 00:08:12.426 "reset": true, 00:08:12.426 "nvme_admin": false, 00:08:12.426 "nvme_io": false, 00:08:12.426 "nvme_io_md": false, 00:08:12.426 
"write_zeroes": true, 00:08:12.426 "zcopy": false, 00:08:12.426 "get_zone_info": false, 00:08:12.426 "zone_management": false, 00:08:12.426 "zone_append": false, 00:08:12.426 "compare": false, 00:08:12.426 "compare_and_write": false, 00:08:12.426 "abort": false, 00:08:12.426 "seek_hole": false, 00:08:12.426 "seek_data": false, 00:08:12.426 "copy": false, 00:08:12.426 "nvme_iov_md": false 00:08:12.426 }, 00:08:12.426 "memory_domains": [ 00:08:12.426 { 00:08:12.426 "dma_device_id": "system", 00:08:12.426 "dma_device_type": 1 00:08:12.426 }, 00:08:12.426 { 00:08:12.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.426 "dma_device_type": 2 00:08:12.426 }, 00:08:12.426 { 00:08:12.426 "dma_device_id": "system", 00:08:12.426 "dma_device_type": 1 00:08:12.426 }, 00:08:12.426 { 00:08:12.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.426 "dma_device_type": 2 00:08:12.426 }, 00:08:12.426 { 00:08:12.426 "dma_device_id": "system", 00:08:12.426 "dma_device_type": 1 00:08:12.426 }, 00:08:12.426 { 00:08:12.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.426 "dma_device_type": 2 00:08:12.426 } 00:08:12.426 ], 00:08:12.426 "driver_specific": { 00:08:12.426 "raid": { 00:08:12.426 "uuid": "2b734705-b3c2-436e-b899-760109040f5a", 00:08:12.426 "strip_size_kb": 64, 00:08:12.426 "state": "online", 00:08:12.426 "raid_level": "raid0", 00:08:12.426 "superblock": true, 00:08:12.426 "num_base_bdevs": 3, 00:08:12.426 "num_base_bdevs_discovered": 3, 00:08:12.426 "num_base_bdevs_operational": 3, 00:08:12.426 "base_bdevs_list": [ 00:08:12.426 { 00:08:12.426 "name": "BaseBdev1", 00:08:12.426 "uuid": "67d5d83e-ffb5-4838-bb12-55aafa7c3361", 00:08:12.426 "is_configured": true, 00:08:12.426 "data_offset": 2048, 00:08:12.426 "data_size": 63488 00:08:12.426 }, 00:08:12.426 { 00:08:12.426 "name": "BaseBdev2", 00:08:12.426 "uuid": "42b9c63b-95cc-45c2-ad0a-fe9447883787", 00:08:12.426 "is_configured": true, 00:08:12.426 "data_offset": 2048, 00:08:12.426 "data_size": 63488 00:08:12.426 }, 
00:08:12.426 { 00:08:12.426 "name": "BaseBdev3", 00:08:12.426 "uuid": "d6e417f7-0286-48a6-943d-b052dba58315", 00:08:12.426 "is_configured": true, 00:08:12.426 "data_offset": 2048, 00:08:12.426 "data_size": 63488 00:08:12.426 } 00:08:12.426 ] 00:08:12.426 } 00:08:12.426 } 00:08:12.426 }' 00:08:12.426 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:12.426 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:12.426 BaseBdev2 00:08:12.426 BaseBdev3' 00:08:12.426 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.687 
12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.687 [2024-11-26 12:51:30.293042] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:12.687 [2024-11-26 12:51:30.293068] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:12.687 [2024-11-26 12:51:30.293131] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.687 "name": "Existed_Raid", 00:08:12.687 "uuid": "2b734705-b3c2-436e-b899-760109040f5a", 00:08:12.687 "strip_size_kb": 64, 00:08:12.687 "state": "offline", 00:08:12.687 "raid_level": "raid0", 00:08:12.687 "superblock": true, 00:08:12.687 "num_base_bdevs": 3, 00:08:12.687 "num_base_bdevs_discovered": 2, 00:08:12.687 "num_base_bdevs_operational": 2, 00:08:12.687 "base_bdevs_list": [ 00:08:12.687 { 00:08:12.687 "name": null, 00:08:12.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.687 "is_configured": false, 00:08:12.687 "data_offset": 0, 00:08:12.687 "data_size": 63488 00:08:12.687 }, 00:08:12.687 { 00:08:12.687 "name": "BaseBdev2", 00:08:12.687 "uuid": "42b9c63b-95cc-45c2-ad0a-fe9447883787", 00:08:12.687 "is_configured": true, 00:08:12.687 "data_offset": 2048, 00:08:12.687 "data_size": 63488 00:08:12.687 }, 00:08:12.687 { 00:08:12.687 "name": "BaseBdev3", 00:08:12.687 "uuid": "d6e417f7-0286-48a6-943d-b052dba58315", 
00:08:12.687 "is_configured": true, 00:08:12.687 "data_offset": 2048, 00:08:12.687 "data_size": 63488 00:08:12.687 } 00:08:12.687 ] 00:08:12.687 }' 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.687 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.257 [2024-11-26 12:51:30.751759] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.257 [2024-11-26 12:51:30.806924] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:13.257 [2024-11-26 12:51:30.806973] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.257 BaseBdev2 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:13.257 12:51:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:13.257 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:13.258 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:13.258 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.258 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.258 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.258 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:13.258 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.258 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.258 [ 00:08:13.258 { 00:08:13.258 "name": "BaseBdev2", 00:08:13.258 "aliases": [ 00:08:13.258 "a0a80153-949c-4d71-8b73-5453aba47691" 00:08:13.258 ], 00:08:13.258 "product_name": "Malloc disk", 00:08:13.258 "block_size": 512, 00:08:13.258 "num_blocks": 65536, 00:08:13.258 "uuid": "a0a80153-949c-4d71-8b73-5453aba47691", 00:08:13.258 "assigned_rate_limits": { 00:08:13.258 "rw_ios_per_sec": 0, 00:08:13.258 "rw_mbytes_per_sec": 0, 00:08:13.258 "r_mbytes_per_sec": 0, 00:08:13.258 "w_mbytes_per_sec": 0 00:08:13.258 }, 00:08:13.258 "claimed": false, 00:08:13.258 "zoned": false, 00:08:13.258 "supported_io_types": { 00:08:13.258 "read": true, 00:08:13.258 "write": true, 00:08:13.258 "unmap": true, 00:08:13.258 "flush": true, 00:08:13.258 "reset": true, 00:08:13.258 "nvme_admin": false, 00:08:13.258 "nvme_io": false, 00:08:13.258 "nvme_io_md": false, 00:08:13.258 "write_zeroes": true, 00:08:13.258 "zcopy": true, 00:08:13.258 "get_zone_info": false, 00:08:13.258 
"zone_management": false, 00:08:13.258 "zone_append": false, 00:08:13.258 "compare": false, 00:08:13.258 "compare_and_write": false, 00:08:13.258 "abort": true, 00:08:13.258 "seek_hole": false, 00:08:13.258 "seek_data": false, 00:08:13.258 "copy": true, 00:08:13.258 "nvme_iov_md": false 00:08:13.258 }, 00:08:13.258 "memory_domains": [ 00:08:13.258 { 00:08:13.258 "dma_device_id": "system", 00:08:13.258 "dma_device_type": 1 00:08:13.258 }, 00:08:13.258 { 00:08:13.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.258 "dma_device_type": 2 00:08:13.258 } 00:08:13.258 ], 00:08:13.258 "driver_specific": {} 00:08:13.258 } 00:08:13.258 ] 00:08:13.258 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.258 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:13.258 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:13.258 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:13.258 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:13.258 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.258 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.518 BaseBdev3 00:08:13.518 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.518 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:13.518 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:13.518 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:13.518 12:51:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:08:13.518 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:13.518 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:13.518 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:13.518 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.518 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.518 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.518 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:13.518 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.518 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.518 [ 00:08:13.518 { 00:08:13.518 "name": "BaseBdev3", 00:08:13.518 "aliases": [ 00:08:13.518 "37ecef73-840f-493f-a9ca-7e7bba0c3be7" 00:08:13.518 ], 00:08:13.518 "product_name": "Malloc disk", 00:08:13.518 "block_size": 512, 00:08:13.518 "num_blocks": 65536, 00:08:13.518 "uuid": "37ecef73-840f-493f-a9ca-7e7bba0c3be7", 00:08:13.518 "assigned_rate_limits": { 00:08:13.518 "rw_ios_per_sec": 0, 00:08:13.518 "rw_mbytes_per_sec": 0, 00:08:13.518 "r_mbytes_per_sec": 0, 00:08:13.518 "w_mbytes_per_sec": 0 00:08:13.518 }, 00:08:13.518 "claimed": false, 00:08:13.518 "zoned": false, 00:08:13.518 "supported_io_types": { 00:08:13.518 "read": true, 00:08:13.518 "write": true, 00:08:13.518 "unmap": true, 00:08:13.518 "flush": true, 00:08:13.518 "reset": true, 00:08:13.518 "nvme_admin": false, 00:08:13.518 "nvme_io": false, 00:08:13.518 "nvme_io_md": false, 00:08:13.518 "write_zeroes": true, 00:08:13.518 
"zcopy": true, 00:08:13.518 "get_zone_info": false, 00:08:13.518 "zone_management": false, 00:08:13.518 "zone_append": false, 00:08:13.518 "compare": false, 00:08:13.518 "compare_and_write": false, 00:08:13.518 "abort": true, 00:08:13.518 "seek_hole": false, 00:08:13.518 "seek_data": false, 00:08:13.518 "copy": true, 00:08:13.518 "nvme_iov_md": false 00:08:13.518 }, 00:08:13.518 "memory_domains": [ 00:08:13.518 { 00:08:13.518 "dma_device_id": "system", 00:08:13.518 "dma_device_type": 1 00:08:13.518 }, 00:08:13.518 { 00:08:13.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.518 "dma_device_type": 2 00:08:13.518 } 00:08:13.518 ], 00:08:13.518 "driver_specific": {} 00:08:13.518 } 00:08:13.518 ] 00:08:13.518 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.518 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:13.518 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:13.518 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:13.518 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:13.519 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.519 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.519 [2024-11-26 12:51:30.982077] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:13.519 [2024-11-26 12:51:30.982173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:13.519 [2024-11-26 12:51:30.982242] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:13.519 [2024-11-26 12:51:30.984143] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:13.519 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.519 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:13.519 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.519 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.519 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.519 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.519 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.519 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.519 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.519 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.519 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.519 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.519 12:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.519 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.519 12:51:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.519 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.519 12:51:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.519 "name": "Existed_Raid", 00:08:13.519 "uuid": "1c98c425-2f6a-415f-a472-e86c06c45783", 00:08:13.519 "strip_size_kb": 64, 00:08:13.519 "state": "configuring", 00:08:13.519 "raid_level": "raid0", 00:08:13.519 "superblock": true, 00:08:13.519 "num_base_bdevs": 3, 00:08:13.519 "num_base_bdevs_discovered": 2, 00:08:13.519 "num_base_bdevs_operational": 3, 00:08:13.519 "base_bdevs_list": [ 00:08:13.519 { 00:08:13.519 "name": "BaseBdev1", 00:08:13.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.519 "is_configured": false, 00:08:13.519 "data_offset": 0, 00:08:13.519 "data_size": 0 00:08:13.519 }, 00:08:13.519 { 00:08:13.519 "name": "BaseBdev2", 00:08:13.519 "uuid": "a0a80153-949c-4d71-8b73-5453aba47691", 00:08:13.519 "is_configured": true, 00:08:13.519 "data_offset": 2048, 00:08:13.519 "data_size": 63488 00:08:13.519 }, 00:08:13.519 { 00:08:13.519 "name": "BaseBdev3", 00:08:13.519 "uuid": "37ecef73-840f-493f-a9ca-7e7bba0c3be7", 00:08:13.519 "is_configured": true, 00:08:13.519 "data_offset": 2048, 00:08:13.519 "data_size": 63488 00:08:13.519 } 00:08:13.519 ] 00:08:13.519 }' 00:08:13.519 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.519 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.779 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:13.779 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.779 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.779 [2024-11-26 12:51:31.409297] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:13.779 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.779 12:51:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:13.779 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.779 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.779 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.779 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.779 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.779 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.779 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.779 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.779 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.779 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.779 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.779 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.779 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.779 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.779 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.779 "name": "Existed_Raid", 00:08:13.779 "uuid": "1c98c425-2f6a-415f-a472-e86c06c45783", 00:08:13.779 "strip_size_kb": 64, 
00:08:13.779 "state": "configuring", 00:08:13.779 "raid_level": "raid0", 00:08:13.779 "superblock": true, 00:08:13.779 "num_base_bdevs": 3, 00:08:13.779 "num_base_bdevs_discovered": 1, 00:08:13.779 "num_base_bdevs_operational": 3, 00:08:13.779 "base_bdevs_list": [ 00:08:13.779 { 00:08:13.779 "name": "BaseBdev1", 00:08:13.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.779 "is_configured": false, 00:08:13.779 "data_offset": 0, 00:08:13.779 "data_size": 0 00:08:13.779 }, 00:08:13.779 { 00:08:13.779 "name": null, 00:08:13.779 "uuid": "a0a80153-949c-4d71-8b73-5453aba47691", 00:08:13.779 "is_configured": false, 00:08:13.779 "data_offset": 0, 00:08:13.779 "data_size": 63488 00:08:13.779 }, 00:08:13.779 { 00:08:13.779 "name": "BaseBdev3", 00:08:13.779 "uuid": "37ecef73-840f-493f-a9ca-7e7bba0c3be7", 00:08:13.779 "is_configured": true, 00:08:13.779 "data_offset": 2048, 00:08:13.779 "data_size": 63488 00:08:13.779 } 00:08:13.779 ] 00:08:13.779 }' 00:08:13.779 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.779 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.348 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.348 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.348 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.348 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:14.348 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.348 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:14.348 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:14.348 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.348 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.348 [2024-11-26 12:51:31.876100] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:14.348 BaseBdev1 00:08:14.348 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.348 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:14.348 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:14.348 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:14.348 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:14.348 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:14.348 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:14.348 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:14.348 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.348 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.348 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.349 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:14.349 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.349 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.349 
[ 00:08:14.349 { 00:08:14.349 "name": "BaseBdev1", 00:08:14.349 "aliases": [ 00:08:14.349 "906962cd-9135-457d-9daa-2de63215d0bf" 00:08:14.349 ], 00:08:14.349 "product_name": "Malloc disk", 00:08:14.349 "block_size": 512, 00:08:14.349 "num_blocks": 65536, 00:08:14.349 "uuid": "906962cd-9135-457d-9daa-2de63215d0bf", 00:08:14.349 "assigned_rate_limits": { 00:08:14.349 "rw_ios_per_sec": 0, 00:08:14.349 "rw_mbytes_per_sec": 0, 00:08:14.349 "r_mbytes_per_sec": 0, 00:08:14.349 "w_mbytes_per_sec": 0 00:08:14.349 }, 00:08:14.349 "claimed": true, 00:08:14.349 "claim_type": "exclusive_write", 00:08:14.349 "zoned": false, 00:08:14.349 "supported_io_types": { 00:08:14.349 "read": true, 00:08:14.349 "write": true, 00:08:14.349 "unmap": true, 00:08:14.349 "flush": true, 00:08:14.349 "reset": true, 00:08:14.349 "nvme_admin": false, 00:08:14.349 "nvme_io": false, 00:08:14.349 "nvme_io_md": false, 00:08:14.349 "write_zeroes": true, 00:08:14.349 "zcopy": true, 00:08:14.349 "get_zone_info": false, 00:08:14.349 "zone_management": false, 00:08:14.349 "zone_append": false, 00:08:14.349 "compare": false, 00:08:14.349 "compare_and_write": false, 00:08:14.349 "abort": true, 00:08:14.349 "seek_hole": false, 00:08:14.349 "seek_data": false, 00:08:14.349 "copy": true, 00:08:14.349 "nvme_iov_md": false 00:08:14.349 }, 00:08:14.349 "memory_domains": [ 00:08:14.349 { 00:08:14.349 "dma_device_id": "system", 00:08:14.349 "dma_device_type": 1 00:08:14.349 }, 00:08:14.349 { 00:08:14.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.349 "dma_device_type": 2 00:08:14.349 } 00:08:14.349 ], 00:08:14.349 "driver_specific": {} 00:08:14.349 } 00:08:14.349 ] 00:08:14.349 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.349 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:14.349 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:08:14.349 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.349 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.349 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.349 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.349 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.349 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.349 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.349 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.349 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.349 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.349 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.349 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.349 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.349 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.349 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.349 "name": "Existed_Raid", 00:08:14.349 "uuid": "1c98c425-2f6a-415f-a472-e86c06c45783", 00:08:14.349 "strip_size_kb": 64, 00:08:14.349 "state": "configuring", 00:08:14.349 "raid_level": "raid0", 00:08:14.349 "superblock": true, 
00:08:14.349 "num_base_bdevs": 3, 00:08:14.349 "num_base_bdevs_discovered": 2, 00:08:14.349 "num_base_bdevs_operational": 3, 00:08:14.349 "base_bdevs_list": [ 00:08:14.349 { 00:08:14.349 "name": "BaseBdev1", 00:08:14.349 "uuid": "906962cd-9135-457d-9daa-2de63215d0bf", 00:08:14.349 "is_configured": true, 00:08:14.349 "data_offset": 2048, 00:08:14.349 "data_size": 63488 00:08:14.349 }, 00:08:14.349 { 00:08:14.349 "name": null, 00:08:14.349 "uuid": "a0a80153-949c-4d71-8b73-5453aba47691", 00:08:14.349 "is_configured": false, 00:08:14.349 "data_offset": 0, 00:08:14.349 "data_size": 63488 00:08:14.349 }, 00:08:14.349 { 00:08:14.349 "name": "BaseBdev3", 00:08:14.349 "uuid": "37ecef73-840f-493f-a9ca-7e7bba0c3be7", 00:08:14.349 "is_configured": true, 00:08:14.349 "data_offset": 2048, 00:08:14.349 "data_size": 63488 00:08:14.349 } 00:08:14.349 ] 00:08:14.349 }' 00:08:14.349 12:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.349 12:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.918 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:14.918 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.918 12:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.918 12:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.918 12:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.918 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:14.918 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:14.918 12:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:08:14.918 12:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.918 [2024-11-26 12:51:32.423270] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:14.919 12:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.919 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.919 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.919 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.919 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.919 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.919 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.919 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.919 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.919 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.919 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.919 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.919 12:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.919 12:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.919 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:14.919 12:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.919 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.919 "name": "Existed_Raid", 00:08:14.919 "uuid": "1c98c425-2f6a-415f-a472-e86c06c45783", 00:08:14.919 "strip_size_kb": 64, 00:08:14.919 "state": "configuring", 00:08:14.919 "raid_level": "raid0", 00:08:14.919 "superblock": true, 00:08:14.919 "num_base_bdevs": 3, 00:08:14.919 "num_base_bdevs_discovered": 1, 00:08:14.919 "num_base_bdevs_operational": 3, 00:08:14.919 "base_bdevs_list": [ 00:08:14.919 { 00:08:14.919 "name": "BaseBdev1", 00:08:14.919 "uuid": "906962cd-9135-457d-9daa-2de63215d0bf", 00:08:14.919 "is_configured": true, 00:08:14.919 "data_offset": 2048, 00:08:14.919 "data_size": 63488 00:08:14.919 }, 00:08:14.919 { 00:08:14.919 "name": null, 00:08:14.919 "uuid": "a0a80153-949c-4d71-8b73-5453aba47691", 00:08:14.919 "is_configured": false, 00:08:14.919 "data_offset": 0, 00:08:14.919 "data_size": 63488 00:08:14.919 }, 00:08:14.919 { 00:08:14.919 "name": null, 00:08:14.919 "uuid": "37ecef73-840f-493f-a9ca-7e7bba0c3be7", 00:08:14.919 "is_configured": false, 00:08:14.919 "data_offset": 0, 00:08:14.919 "data_size": 63488 00:08:14.919 } 00:08:14.919 ] 00:08:14.919 }' 00:08:14.919 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.919 12:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.178 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.178 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:15.178 12:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.178 12:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:08:15.178 12:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.443 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:15.443 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:15.443 12:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.443 12:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.443 [2024-11-26 12:51:32.878630] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:15.443 12:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.443 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.443 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.443 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.443 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.443 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.443 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.443 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.443 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.443 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.443 12:51:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.443 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.443 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.443 12:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.443 12:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.443 12:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.443 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.443 "name": "Existed_Raid", 00:08:15.443 "uuid": "1c98c425-2f6a-415f-a472-e86c06c45783", 00:08:15.443 "strip_size_kb": 64, 00:08:15.443 "state": "configuring", 00:08:15.443 "raid_level": "raid0", 00:08:15.443 "superblock": true, 00:08:15.443 "num_base_bdevs": 3, 00:08:15.443 "num_base_bdevs_discovered": 2, 00:08:15.443 "num_base_bdevs_operational": 3, 00:08:15.443 "base_bdevs_list": [ 00:08:15.443 { 00:08:15.443 "name": "BaseBdev1", 00:08:15.443 "uuid": "906962cd-9135-457d-9daa-2de63215d0bf", 00:08:15.443 "is_configured": true, 00:08:15.443 "data_offset": 2048, 00:08:15.443 "data_size": 63488 00:08:15.443 }, 00:08:15.443 { 00:08:15.443 "name": null, 00:08:15.443 "uuid": "a0a80153-949c-4d71-8b73-5453aba47691", 00:08:15.443 "is_configured": false, 00:08:15.443 "data_offset": 0, 00:08:15.443 "data_size": 63488 00:08:15.443 }, 00:08:15.443 { 00:08:15.443 "name": "BaseBdev3", 00:08:15.443 "uuid": "37ecef73-840f-493f-a9ca-7e7bba0c3be7", 00:08:15.443 "is_configured": true, 00:08:15.443 "data_offset": 2048, 00:08:15.443 "data_size": 63488 00:08:15.443 } 00:08:15.443 ] 00:08:15.443 }' 00:08:15.443 12:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.443 12:51:32 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:08:15.704 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.704 12:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.704 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:15.704 12:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.704 12:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.704 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:15.704 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:15.704 12:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.704 12:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.704 [2024-11-26 12:51:33.325882] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:15.704 12:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.704 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.704 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.704 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.704 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.704 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.704 12:51:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.704 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.704 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.704 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.704 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.704 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.704 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.704 12:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.704 12:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.704 12:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.704 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.704 "name": "Existed_Raid", 00:08:15.704 "uuid": "1c98c425-2f6a-415f-a472-e86c06c45783", 00:08:15.704 "strip_size_kb": 64, 00:08:15.704 "state": "configuring", 00:08:15.704 "raid_level": "raid0", 00:08:15.704 "superblock": true, 00:08:15.704 "num_base_bdevs": 3, 00:08:15.704 "num_base_bdevs_discovered": 1, 00:08:15.704 "num_base_bdevs_operational": 3, 00:08:15.704 "base_bdevs_list": [ 00:08:15.704 { 00:08:15.704 "name": null, 00:08:15.704 "uuid": "906962cd-9135-457d-9daa-2de63215d0bf", 00:08:15.704 "is_configured": false, 00:08:15.704 "data_offset": 0, 00:08:15.704 "data_size": 63488 00:08:15.704 }, 00:08:15.704 { 00:08:15.704 "name": null, 00:08:15.704 "uuid": "a0a80153-949c-4d71-8b73-5453aba47691", 00:08:15.704 "is_configured": false, 00:08:15.704 "data_offset": 0, 00:08:15.704 
"data_size": 63488 00:08:15.704 }, 00:08:15.704 { 00:08:15.704 "name": "BaseBdev3", 00:08:15.704 "uuid": "37ecef73-840f-493f-a9ca-7e7bba0c3be7", 00:08:15.704 "is_configured": true, 00:08:15.704 "data_offset": 2048, 00:08:15.704 "data_size": 63488 00:08:15.704 } 00:08:15.704 ] 00:08:15.704 }' 00:08:15.704 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.704 12:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.274 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.274 12:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.274 12:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.274 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:16.274 12:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.274 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:16.274 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:16.274 12:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.274 12:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.274 [2024-11-26 12:51:33.827531] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:16.274 12:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.274 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:16.274 12:51:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.274 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.274 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.274 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.274 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.274 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.274 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.274 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.274 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.274 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.274 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.274 12:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.274 12:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.274 12:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.274 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.274 "name": "Existed_Raid", 00:08:16.274 "uuid": "1c98c425-2f6a-415f-a472-e86c06c45783", 00:08:16.274 "strip_size_kb": 64, 00:08:16.274 "state": "configuring", 00:08:16.274 "raid_level": "raid0", 00:08:16.274 "superblock": true, 00:08:16.274 "num_base_bdevs": 3, 00:08:16.274 
"num_base_bdevs_discovered": 2, 00:08:16.274 "num_base_bdevs_operational": 3, 00:08:16.274 "base_bdevs_list": [ 00:08:16.274 { 00:08:16.274 "name": null, 00:08:16.274 "uuid": "906962cd-9135-457d-9daa-2de63215d0bf", 00:08:16.274 "is_configured": false, 00:08:16.274 "data_offset": 0, 00:08:16.274 "data_size": 63488 00:08:16.274 }, 00:08:16.274 { 00:08:16.274 "name": "BaseBdev2", 00:08:16.274 "uuid": "a0a80153-949c-4d71-8b73-5453aba47691", 00:08:16.274 "is_configured": true, 00:08:16.274 "data_offset": 2048, 00:08:16.274 "data_size": 63488 00:08:16.274 }, 00:08:16.274 { 00:08:16.274 "name": "BaseBdev3", 00:08:16.274 "uuid": "37ecef73-840f-493f-a9ca-7e7bba0c3be7", 00:08:16.274 "is_configured": true, 00:08:16.274 "data_offset": 2048, 00:08:16.274 "data_size": 63488 00:08:16.274 } 00:08:16.274 ] 00:08:16.274 }' 00:08:16.274 12:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.274 12:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.844 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.844 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:16.844 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.844 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.844 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.844 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:16.844 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.844 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.844 12:51:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.844 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:16.844 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.844 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 906962cd-9135-457d-9daa-2de63215d0bf 00:08:16.844 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.844 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.844 NewBaseBdev 00:08:16.844 [2024-11-26 12:51:34.373688] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:16.845 [2024-11-26 12:51:34.373849] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:16.845 [2024-11-26 12:51:34.373864] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:16.845 [2024-11-26 12:51:34.374094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:16.845 [2024-11-26 12:51:34.374220] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:16.845 [2024-11-26 12:51:34.374230] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:16.845 [2024-11-26 12:51:34.374336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 
00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.845 [ 00:08:16.845 { 00:08:16.845 "name": "NewBaseBdev", 00:08:16.845 "aliases": [ 00:08:16.845 "906962cd-9135-457d-9daa-2de63215d0bf" 00:08:16.845 ], 00:08:16.845 "product_name": "Malloc disk", 00:08:16.845 "block_size": 512, 00:08:16.845 "num_blocks": 65536, 00:08:16.845 "uuid": "906962cd-9135-457d-9daa-2de63215d0bf", 00:08:16.845 "assigned_rate_limits": { 00:08:16.845 "rw_ios_per_sec": 0, 00:08:16.845 "rw_mbytes_per_sec": 0, 00:08:16.845 "r_mbytes_per_sec": 0, 00:08:16.845 "w_mbytes_per_sec": 0 00:08:16.845 }, 00:08:16.845 "claimed": true, 00:08:16.845 "claim_type": "exclusive_write", 00:08:16.845 "zoned": false, 00:08:16.845 "supported_io_types": { 00:08:16.845 "read": true, 00:08:16.845 "write": true, 
00:08:16.845 "unmap": true, 00:08:16.845 "flush": true, 00:08:16.845 "reset": true, 00:08:16.845 "nvme_admin": false, 00:08:16.845 "nvme_io": false, 00:08:16.845 "nvme_io_md": false, 00:08:16.845 "write_zeroes": true, 00:08:16.845 "zcopy": true, 00:08:16.845 "get_zone_info": false, 00:08:16.845 "zone_management": false, 00:08:16.845 "zone_append": false, 00:08:16.845 "compare": false, 00:08:16.845 "compare_and_write": false, 00:08:16.845 "abort": true, 00:08:16.845 "seek_hole": false, 00:08:16.845 "seek_data": false, 00:08:16.845 "copy": true, 00:08:16.845 "nvme_iov_md": false 00:08:16.845 }, 00:08:16.845 "memory_domains": [ 00:08:16.845 { 00:08:16.845 "dma_device_id": "system", 00:08:16.845 "dma_device_type": 1 00:08:16.845 }, 00:08:16.845 { 00:08:16.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.845 "dma_device_type": 2 00:08:16.845 } 00:08:16.845 ], 00:08:16.845 "driver_specific": {} 00:08:16.845 } 00:08:16.845 ] 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.845 "name": "Existed_Raid", 00:08:16.845 "uuid": "1c98c425-2f6a-415f-a472-e86c06c45783", 00:08:16.845 "strip_size_kb": 64, 00:08:16.845 "state": "online", 00:08:16.845 "raid_level": "raid0", 00:08:16.845 "superblock": true, 00:08:16.845 "num_base_bdevs": 3, 00:08:16.845 "num_base_bdevs_discovered": 3, 00:08:16.845 "num_base_bdevs_operational": 3, 00:08:16.845 "base_bdevs_list": [ 00:08:16.845 { 00:08:16.845 "name": "NewBaseBdev", 00:08:16.845 "uuid": "906962cd-9135-457d-9daa-2de63215d0bf", 00:08:16.845 "is_configured": true, 00:08:16.845 "data_offset": 2048, 00:08:16.845 "data_size": 63488 00:08:16.845 }, 00:08:16.845 { 00:08:16.845 "name": "BaseBdev2", 00:08:16.845 "uuid": "a0a80153-949c-4d71-8b73-5453aba47691", 00:08:16.845 "is_configured": true, 00:08:16.845 "data_offset": 2048, 00:08:16.845 "data_size": 63488 00:08:16.845 }, 00:08:16.845 { 00:08:16.845 "name": "BaseBdev3", 00:08:16.845 "uuid": 
"37ecef73-840f-493f-a9ca-7e7bba0c3be7", 00:08:16.845 "is_configured": true, 00:08:16.845 "data_offset": 2048, 00:08:16.845 "data_size": 63488 00:08:16.845 } 00:08:16.845 ] 00:08:16.845 }' 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.845 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.413 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:17.413 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:17.414 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:17.414 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:17.414 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:17.414 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:17.414 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:17.414 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:17.414 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.414 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.414 [2024-11-26 12:51:34.813226] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:17.414 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.414 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:17.414 "name": "Existed_Raid", 00:08:17.414 "aliases": [ 00:08:17.414 "1c98c425-2f6a-415f-a472-e86c06c45783" 
00:08:17.414 ], 00:08:17.414 "product_name": "Raid Volume", 00:08:17.414 "block_size": 512, 00:08:17.414 "num_blocks": 190464, 00:08:17.414 "uuid": "1c98c425-2f6a-415f-a472-e86c06c45783", 00:08:17.414 "assigned_rate_limits": { 00:08:17.414 "rw_ios_per_sec": 0, 00:08:17.414 "rw_mbytes_per_sec": 0, 00:08:17.414 "r_mbytes_per_sec": 0, 00:08:17.414 "w_mbytes_per_sec": 0 00:08:17.414 }, 00:08:17.414 "claimed": false, 00:08:17.414 "zoned": false, 00:08:17.414 "supported_io_types": { 00:08:17.414 "read": true, 00:08:17.414 "write": true, 00:08:17.414 "unmap": true, 00:08:17.414 "flush": true, 00:08:17.414 "reset": true, 00:08:17.414 "nvme_admin": false, 00:08:17.414 "nvme_io": false, 00:08:17.414 "nvme_io_md": false, 00:08:17.414 "write_zeroes": true, 00:08:17.414 "zcopy": false, 00:08:17.414 "get_zone_info": false, 00:08:17.414 "zone_management": false, 00:08:17.414 "zone_append": false, 00:08:17.414 "compare": false, 00:08:17.414 "compare_and_write": false, 00:08:17.414 "abort": false, 00:08:17.414 "seek_hole": false, 00:08:17.414 "seek_data": false, 00:08:17.414 "copy": false, 00:08:17.414 "nvme_iov_md": false 00:08:17.414 }, 00:08:17.414 "memory_domains": [ 00:08:17.414 { 00:08:17.414 "dma_device_id": "system", 00:08:17.414 "dma_device_type": 1 00:08:17.414 }, 00:08:17.414 { 00:08:17.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.414 "dma_device_type": 2 00:08:17.414 }, 00:08:17.414 { 00:08:17.414 "dma_device_id": "system", 00:08:17.414 "dma_device_type": 1 00:08:17.414 }, 00:08:17.414 { 00:08:17.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.414 "dma_device_type": 2 00:08:17.414 }, 00:08:17.414 { 00:08:17.414 "dma_device_id": "system", 00:08:17.414 "dma_device_type": 1 00:08:17.414 }, 00:08:17.414 { 00:08:17.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.414 "dma_device_type": 2 00:08:17.414 } 00:08:17.414 ], 00:08:17.414 "driver_specific": { 00:08:17.414 "raid": { 00:08:17.414 "uuid": "1c98c425-2f6a-415f-a472-e86c06c45783", 00:08:17.414 
"strip_size_kb": 64, 00:08:17.414 "state": "online", 00:08:17.414 "raid_level": "raid0", 00:08:17.414 "superblock": true, 00:08:17.414 "num_base_bdevs": 3, 00:08:17.414 "num_base_bdevs_discovered": 3, 00:08:17.414 "num_base_bdevs_operational": 3, 00:08:17.414 "base_bdevs_list": [ 00:08:17.414 { 00:08:17.414 "name": "NewBaseBdev", 00:08:17.414 "uuid": "906962cd-9135-457d-9daa-2de63215d0bf", 00:08:17.414 "is_configured": true, 00:08:17.414 "data_offset": 2048, 00:08:17.414 "data_size": 63488 00:08:17.414 }, 00:08:17.414 { 00:08:17.414 "name": "BaseBdev2", 00:08:17.414 "uuid": "a0a80153-949c-4d71-8b73-5453aba47691", 00:08:17.414 "is_configured": true, 00:08:17.414 "data_offset": 2048, 00:08:17.414 "data_size": 63488 00:08:17.414 }, 00:08:17.414 { 00:08:17.414 "name": "BaseBdev3", 00:08:17.414 "uuid": "37ecef73-840f-493f-a9ca-7e7bba0c3be7", 00:08:17.414 "is_configured": true, 00:08:17.414 "data_offset": 2048, 00:08:17.414 "data_size": 63488 00:08:17.414 } 00:08:17.414 ] 00:08:17.414 } 00:08:17.414 } 00:08:17.414 }' 00:08:17.414 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:17.414 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:17.414 BaseBdev2 00:08:17.414 BaseBdev3' 00:08:17.414 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.414 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:17.414 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.414 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:17.414 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.414 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.414 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.414 12:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.414 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.414 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.414 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.414 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:17.414 12:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.414 12:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.414 12:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.414 12:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.414 12:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.414 12:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.414 12:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.414 12:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.414 12:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 
00:08:17.414 12:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.414 12:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.414 12:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.414 12:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.414 12:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.414 12:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:17.414 12:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.414 12:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.414 [2024-11-26 12:51:35.080504] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:17.414 [2024-11-26 12:51:35.080571] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:17.414 [2024-11-26 12:51:35.080658] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:17.414 [2024-11-26 12:51:35.080707] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:17.414 [2024-11-26 12:51:35.080720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:17.414 12:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.414 12:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75871 00:08:17.414 12:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 75871 ']' 00:08:17.414 12:51:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@954 -- # kill -0 75871 00:08:17.414 12:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:17.673 12:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:17.673 12:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75871 00:08:17.673 12:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:17.673 12:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:17.673 12:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75871' 00:08:17.673 killing process with pid 75871 00:08:17.673 12:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 75871 00:08:17.673 [2024-11-26 12:51:35.119359] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:17.673 12:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 75871 00:08:17.673 [2024-11-26 12:51:35.150236] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:17.932 12:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:17.932 00:08:17.932 real 0m8.498s 00:08:17.932 user 0m14.466s 00:08:17.932 sys 0m1.722s 00:08:17.932 ************************************ 00:08:17.932 END TEST raid_state_function_test_sb 00:08:17.932 ************************************ 00:08:17.932 12:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.932 12:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.932 12:51:35 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:17.932 12:51:35 bdev_raid -- 
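The `raid_state_function_test_sb` trace above ends with `verify_raid_bdev_properties`, which joins each bdev's `block_size`, `md_size`, `md_interleave`, and `dif_type` fields into a single string and compares the raid volume's string against every base bdev using a bash pattern match (the escaped `\5\1\2\ \ \ ` form seen in the trace). A minimal sketch of that comparison idiom follows; the sample strings stand in for real `rpc_cmd bdev_get_bdevs` / `jq` output and are not taken from this run:

```shell
#!/usr/bin/env bash
# Hedged sketch of the metadata comparison in verify_raid_bdev_properties.
# The jq filter in the trace is: '[.block_size, .md_size, .md_interleave,
# .dif_type] | join(" ")' -- for a plain 512-byte-block bdev with no
# metadata the three trailing fields are empty, giving "512" plus 3 spaces.
cmp_raid_bdev='512   '   # sample value standing in for the raid volume
cmp_base_bdev='512   '   # sample value standing in for one base bdev

# Inside [[ ]], == performs pattern matching; the trace escapes every
# character (\5\1\2\ \ \ ) so the comparison is literal and preserves
# the trailing spaces that encode the empty metadata fields.
if [[ "$cmp_base_bdev" == "$cmp_raid_bdev" ]]; then
  echo "base bdev metadata format matches raid volume"
else
  echo "metadata format mismatch" >&2
  exit 1
fi
```

This is why the test iterates `for name in $base_bdev_names`: a raid volume is only considered consistent if every configured base bdev reports the same block/metadata layout as the volume itself.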
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:17.932 12:51:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.932 12:51:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:17.932 ************************************ 00:08:17.932 START TEST raid_superblock_test 00:08:17.932 ************************************ 00:08:17.932 12:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:08:17.932 12:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:17.932 12:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:17.932 12:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:17.932 12:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:17.932 12:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:17.932 12:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:17.932 12:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:17.932 12:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:17.932 12:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:17.932 12:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:17.932 12:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:17.932 12:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:17.932 12:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:17.932 12:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:17.932 12:51:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:17.932 12:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:17.932 12:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=76469 00:08:17.932 12:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:17.932 12:51:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 76469 00:08:17.932 12:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 76469 ']' 00:08:17.932 12:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.932 12:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:17.932 12:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.932 12:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:17.932 12:51:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.932 [2024-11-26 12:51:35.545010] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:17.932 [2024-11-26 12:51:35.545247] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76469 ] 00:08:18.191 [2024-11-26 12:51:35.711882] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.191 [2024-11-26 12:51:35.756585] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.191 [2024-11-26 12:51:35.798036] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.191 [2024-11-26 12:51:35.798171] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:18.759 
12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.759 malloc1 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.759 [2024-11-26 12:51:36.392105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:18.759 [2024-11-26 12:51:36.392197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.759 [2024-11-26 12:51:36.392227] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:18.759 [2024-11-26 12:51:36.392255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.759 [2024-11-26 12:51:36.394359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.759 [2024-11-26 12:51:36.394402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:18.759 pt1 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.759 malloc2 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.759 12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.759 [2024-11-26 12:51:36.435164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:18.759 [2024-11-26 12:51:36.435402] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.759 [2024-11-26 12:51:36.435515] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:18.759 [2024-11-26 12:51:36.435647] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.018 [2024-11-26 12:51:36.440606] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.018 [2024-11-26 12:51:36.440772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:19.018 
pt2 00:08:19.018 12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.018 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:19.018 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:19.018 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:19.018 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:19.018 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:19.018 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:19.018 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:19.018 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:19.018 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:19.018 12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.018 12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.018 malloc3 00:08:19.018 12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.018 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:19.018 12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.018 12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.018 [2024-11-26 12:51:36.470784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:19.018 [2024-11-26 12:51:36.470892] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.018 [2024-11-26 12:51:36.470937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:19.018 [2024-11-26 12:51:36.471001] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.018 [2024-11-26 12:51:36.473088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.018 [2024-11-26 12:51:36.473162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:19.018 pt3 00:08:19.018 12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.018 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:19.018 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:19.018 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:19.018 12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.018 12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.018 [2024-11-26 12:51:36.482811] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:19.018 [2024-11-26 12:51:36.484676] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:19.018 [2024-11-26 12:51:36.484780] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:19.018 [2024-11-26 12:51:36.484956] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:19.018 [2024-11-26 12:51:36.485013] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:19.018 [2024-11-26 12:51:36.485314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 
00:08:19.018 [2024-11-26 12:51:36.485499] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:19.018 [2024-11-26 12:51:36.485550] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:08:19.018 [2024-11-26 12:51:36.485719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.018 12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.018 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:19.018 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:19.018 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:19.018 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.018 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.019 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.019 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.019 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.019 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.019 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.019 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.019 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.019 12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.019 12:51:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.019 12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.019 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.019 "name": "raid_bdev1", 00:08:19.019 "uuid": "0083de72-805c-43b7-8a86-5d021c2a1486", 00:08:19.019 "strip_size_kb": 64, 00:08:19.019 "state": "online", 00:08:19.019 "raid_level": "raid0", 00:08:19.019 "superblock": true, 00:08:19.019 "num_base_bdevs": 3, 00:08:19.019 "num_base_bdevs_discovered": 3, 00:08:19.019 "num_base_bdevs_operational": 3, 00:08:19.019 "base_bdevs_list": [ 00:08:19.019 { 00:08:19.019 "name": "pt1", 00:08:19.019 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:19.019 "is_configured": true, 00:08:19.019 "data_offset": 2048, 00:08:19.019 "data_size": 63488 00:08:19.019 }, 00:08:19.019 { 00:08:19.019 "name": "pt2", 00:08:19.019 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:19.019 "is_configured": true, 00:08:19.019 "data_offset": 2048, 00:08:19.019 "data_size": 63488 00:08:19.019 }, 00:08:19.019 { 00:08:19.019 "name": "pt3", 00:08:19.019 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:19.019 "is_configured": true, 00:08:19.019 "data_offset": 2048, 00:08:19.019 "data_size": 63488 00:08:19.019 } 00:08:19.019 ] 00:08:19.019 }' 00:08:19.019 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.019 12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.289 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:19.289 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:19.289 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:19.289 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:19.289 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:19.289 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:19.289 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:19.289 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:19.289 12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.289 12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.289 [2024-11-26 12:51:36.878424] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:19.289 12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.289 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:19.289 "name": "raid_bdev1", 00:08:19.289 "aliases": [ 00:08:19.289 "0083de72-805c-43b7-8a86-5d021c2a1486" 00:08:19.289 ], 00:08:19.289 "product_name": "Raid Volume", 00:08:19.289 "block_size": 512, 00:08:19.289 "num_blocks": 190464, 00:08:19.289 "uuid": "0083de72-805c-43b7-8a86-5d021c2a1486", 00:08:19.289 "assigned_rate_limits": { 00:08:19.289 "rw_ios_per_sec": 0, 00:08:19.289 "rw_mbytes_per_sec": 0, 00:08:19.289 "r_mbytes_per_sec": 0, 00:08:19.289 "w_mbytes_per_sec": 0 00:08:19.289 }, 00:08:19.289 "claimed": false, 00:08:19.289 "zoned": false, 00:08:19.289 "supported_io_types": { 00:08:19.289 "read": true, 00:08:19.289 "write": true, 00:08:19.289 "unmap": true, 00:08:19.289 "flush": true, 00:08:19.289 "reset": true, 00:08:19.289 "nvme_admin": false, 00:08:19.289 "nvme_io": false, 00:08:19.289 "nvme_io_md": false, 00:08:19.289 "write_zeroes": true, 00:08:19.289 "zcopy": false, 00:08:19.289 "get_zone_info": false, 00:08:19.289 "zone_management": false, 00:08:19.289 "zone_append": false, 00:08:19.289 "compare": 
false, 00:08:19.289 "compare_and_write": false, 00:08:19.289 "abort": false, 00:08:19.289 "seek_hole": false, 00:08:19.289 "seek_data": false, 00:08:19.289 "copy": false, 00:08:19.289 "nvme_iov_md": false 00:08:19.289 }, 00:08:19.289 "memory_domains": [ 00:08:19.289 { 00:08:19.289 "dma_device_id": "system", 00:08:19.289 "dma_device_type": 1 00:08:19.289 }, 00:08:19.289 { 00:08:19.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.289 "dma_device_type": 2 00:08:19.289 }, 00:08:19.289 { 00:08:19.289 "dma_device_id": "system", 00:08:19.289 "dma_device_type": 1 00:08:19.289 }, 00:08:19.289 { 00:08:19.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.289 "dma_device_type": 2 00:08:19.289 }, 00:08:19.289 { 00:08:19.289 "dma_device_id": "system", 00:08:19.289 "dma_device_type": 1 00:08:19.289 }, 00:08:19.289 { 00:08:19.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.289 "dma_device_type": 2 00:08:19.289 } 00:08:19.289 ], 00:08:19.289 "driver_specific": { 00:08:19.289 "raid": { 00:08:19.289 "uuid": "0083de72-805c-43b7-8a86-5d021c2a1486", 00:08:19.289 "strip_size_kb": 64, 00:08:19.289 "state": "online", 00:08:19.289 "raid_level": "raid0", 00:08:19.289 "superblock": true, 00:08:19.289 "num_base_bdevs": 3, 00:08:19.289 "num_base_bdevs_discovered": 3, 00:08:19.289 "num_base_bdevs_operational": 3, 00:08:19.289 "base_bdevs_list": [ 00:08:19.289 { 00:08:19.289 "name": "pt1", 00:08:19.289 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:19.289 "is_configured": true, 00:08:19.289 "data_offset": 2048, 00:08:19.289 "data_size": 63488 00:08:19.289 }, 00:08:19.289 { 00:08:19.289 "name": "pt2", 00:08:19.289 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:19.289 "is_configured": true, 00:08:19.289 "data_offset": 2048, 00:08:19.289 "data_size": 63488 00:08:19.289 }, 00:08:19.289 { 00:08:19.289 "name": "pt3", 00:08:19.289 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:19.289 "is_configured": true, 00:08:19.289 "data_offset": 2048, 00:08:19.289 "data_size": 
63488 00:08:19.289 } 00:08:19.289 ] 00:08:19.289 } 00:08:19.289 } 00:08:19.289 }' 00:08:19.289 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:19.289 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:19.289 pt2 00:08:19.289 pt3' 00:08:19.289 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.562 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:19.562 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:19.562 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:19.562 12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.562 12:51:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.562 12:51:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.562 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.562 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:19.562 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:19.562 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:19.562 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:19.562 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.562 12:51:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.562 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.562 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.562 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:19.562 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:19.562 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:19.562 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:19.563 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.563 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.563 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.563 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.563 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:19.563 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:19.563 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:19.563 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:19.563 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.563 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.563 [2024-11-26 12:51:37.141913] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:19.563 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:19.563 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0083de72-805c-43b7-8a86-5d021c2a1486 00:08:19.563 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0083de72-805c-43b7-8a86-5d021c2a1486 ']' 00:08:19.563 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:19.563 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.563 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.563 [2024-11-26 12:51:37.177601] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:19.563 [2024-11-26 12:51:37.177665] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:19.563 [2024-11-26 12:51:37.177767] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:19.563 [2024-11-26 12:51:37.177848] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:19.563 [2024-11-26 12:51:37.177883] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:08:19.563 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.563 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.563 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:19.563 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.563 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.563 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.563 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:19.563 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:19.563 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:19.563 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:19.563 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.563 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.822 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.822 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:19.822 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:19.822 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.822 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.822 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.822 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:19.822 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:19.822 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.822 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.822 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.822 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:19.822 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:19.822 12:51:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.822 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.822 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.822 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:19.822 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:19.822 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:19.822 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:19.822 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:19.822 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.822 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:19.822 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.822 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:19.822 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.822 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.822 [2024-11-26 12:51:37.317381] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:19.822 [2024-11-26 12:51:37.319225] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:19.822 [2024-11-26 12:51:37.319277] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:19.822 [2024-11-26 12:51:37.319325] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:19.822 [2024-11-26 12:51:37.319370] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:19.822 [2024-11-26 12:51:37.319396] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:19.822 [2024-11-26 12:51:37.319412] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:19.822 [2024-11-26 12:51:37.319426] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:08:19.822 request: 00:08:19.822 { 00:08:19.822 "name": "raid_bdev1", 00:08:19.822 "raid_level": "raid0", 00:08:19.822 "base_bdevs": [ 00:08:19.822 "malloc1", 00:08:19.822 "malloc2", 00:08:19.822 "malloc3" 00:08:19.822 ], 00:08:19.822 "strip_size_kb": 64, 00:08:19.822 "superblock": false, 00:08:19.822 "method": "bdev_raid_create", 00:08:19.822 "req_id": 1 00:08:19.822 } 00:08:19.822 Got JSON-RPC error response 00:08:19.822 response: 00:08:19.822 { 00:08:19.822 "code": -17, 00:08:19.822 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:19.822 } 00:08:19.822 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:19.822 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:19.822 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.823 [2024-11-26 12:51:37.381277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:19.823 [2024-11-26 12:51:37.381369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.823 [2024-11-26 12:51:37.381396] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:19.823 [2024-11-26 12:51:37.381411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.823 [2024-11-26 12:51:37.383508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.823 [2024-11-26 12:51:37.383544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:19.823 [2024-11-26 12:51:37.383631] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:19.823 [2024-11-26 12:51:37.383677] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:19.823 pt1 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.823 "name": "raid_bdev1", 00:08:19.823 "uuid": "0083de72-805c-43b7-8a86-5d021c2a1486", 00:08:19.823 
"strip_size_kb": 64, 00:08:19.823 "state": "configuring", 00:08:19.823 "raid_level": "raid0", 00:08:19.823 "superblock": true, 00:08:19.823 "num_base_bdevs": 3, 00:08:19.823 "num_base_bdevs_discovered": 1, 00:08:19.823 "num_base_bdevs_operational": 3, 00:08:19.823 "base_bdevs_list": [ 00:08:19.823 { 00:08:19.823 "name": "pt1", 00:08:19.823 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:19.823 "is_configured": true, 00:08:19.823 "data_offset": 2048, 00:08:19.823 "data_size": 63488 00:08:19.823 }, 00:08:19.823 { 00:08:19.823 "name": null, 00:08:19.823 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:19.823 "is_configured": false, 00:08:19.823 "data_offset": 2048, 00:08:19.823 "data_size": 63488 00:08:19.823 }, 00:08:19.823 { 00:08:19.823 "name": null, 00:08:19.823 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:19.823 "is_configured": false, 00:08:19.823 "data_offset": 2048, 00:08:19.823 "data_size": 63488 00:08:19.823 } 00:08:19.823 ] 00:08:19.823 }' 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.823 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.392 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:20.392 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:20.392 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.392 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.392 [2024-11-26 12:51:37.776614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:20.392 [2024-11-26 12:51:37.776738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.392 [2024-11-26 12:51:37.776766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:20.392 [2024-11-26 12:51:37.776783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.392 [2024-11-26 12:51:37.777199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.392 [2024-11-26 12:51:37.777230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:20.392 [2024-11-26 12:51:37.777318] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:20.392 [2024-11-26 12:51:37.777352] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:20.392 pt2 00:08:20.392 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.392 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:20.392 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.392 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.392 [2024-11-26 12:51:37.788597] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:20.392 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.392 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:20.392 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:20.392 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.392 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.392 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.392 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.392 12:51:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.392 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.392 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.392 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.392 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.392 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.392 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.392 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.392 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.392 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.392 "name": "raid_bdev1", 00:08:20.392 "uuid": "0083de72-805c-43b7-8a86-5d021c2a1486", 00:08:20.392 "strip_size_kb": 64, 00:08:20.392 "state": "configuring", 00:08:20.392 "raid_level": "raid0", 00:08:20.392 "superblock": true, 00:08:20.392 "num_base_bdevs": 3, 00:08:20.392 "num_base_bdevs_discovered": 1, 00:08:20.392 "num_base_bdevs_operational": 3, 00:08:20.392 "base_bdevs_list": [ 00:08:20.392 { 00:08:20.392 "name": "pt1", 00:08:20.392 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:20.392 "is_configured": true, 00:08:20.392 "data_offset": 2048, 00:08:20.392 "data_size": 63488 00:08:20.392 }, 00:08:20.392 { 00:08:20.392 "name": null, 00:08:20.392 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:20.392 "is_configured": false, 00:08:20.392 "data_offset": 0, 00:08:20.392 "data_size": 63488 00:08:20.392 }, 00:08:20.392 { 00:08:20.392 "name": null, 00:08:20.392 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:20.392 
"is_configured": false, 00:08:20.392 "data_offset": 2048, 00:08:20.392 "data_size": 63488 00:08:20.392 } 00:08:20.392 ] 00:08:20.392 }' 00:08:20.392 12:51:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.392 12:51:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.652 [2024-11-26 12:51:38.219891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:20.652 [2024-11-26 12:51:38.220013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.652 [2024-11-26 12:51:38.220062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:20.652 [2024-11-26 12:51:38.220104] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.652 [2024-11-26 12:51:38.220567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.652 [2024-11-26 12:51:38.220636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:20.652 [2024-11-26 12:51:38.220773] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:20.652 [2024-11-26 12:51:38.220839] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:20.652 pt2 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.652 [2024-11-26 12:51:38.231838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:20.652 [2024-11-26 12:51:38.231935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.652 [2024-11-26 12:51:38.231982] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:20.652 [2024-11-26 12:51:38.232029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.652 [2024-11-26 12:51:38.232415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.652 [2024-11-26 12:51:38.232482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:20.652 [2024-11-26 12:51:38.232594] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:20.652 [2024-11-26 12:51:38.232659] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:20.652 [2024-11-26 12:51:38.232799] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:20.652 [2024-11-26 12:51:38.232840] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:20.652 [2024-11-26 12:51:38.233088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:20.652 [2024-11-26 12:51:38.233249] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:20.652 [2024-11-26 12:51:38.233296] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:20.652 [2024-11-26 12:51:38.233444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.652 pt3 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.652 "name": "raid_bdev1", 00:08:20.652 "uuid": "0083de72-805c-43b7-8a86-5d021c2a1486", 00:08:20.652 "strip_size_kb": 64, 00:08:20.652 "state": "online", 00:08:20.652 "raid_level": "raid0", 00:08:20.652 "superblock": true, 00:08:20.652 "num_base_bdevs": 3, 00:08:20.652 "num_base_bdevs_discovered": 3, 00:08:20.652 "num_base_bdevs_operational": 3, 00:08:20.652 "base_bdevs_list": [ 00:08:20.652 { 00:08:20.652 "name": "pt1", 00:08:20.652 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:20.652 "is_configured": true, 00:08:20.652 "data_offset": 2048, 00:08:20.652 "data_size": 63488 00:08:20.652 }, 00:08:20.652 { 00:08:20.652 "name": "pt2", 00:08:20.652 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:20.652 "is_configured": true, 00:08:20.652 "data_offset": 2048, 00:08:20.652 "data_size": 63488 00:08:20.652 }, 00:08:20.652 { 00:08:20.652 "name": "pt3", 00:08:20.652 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:20.652 "is_configured": true, 00:08:20.652 "data_offset": 2048, 00:08:20.652 "data_size": 63488 00:08:20.652 } 00:08:20.652 ] 00:08:20.652 }' 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.652 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.222 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:21.222 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:21.222 12:51:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:21.222 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:21.222 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:21.222 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:21.222 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:21.222 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:21.222 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.222 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.222 [2024-11-26 12:51:38.659664] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:21.222 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.222 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:21.222 "name": "raid_bdev1", 00:08:21.222 "aliases": [ 00:08:21.222 "0083de72-805c-43b7-8a86-5d021c2a1486" 00:08:21.222 ], 00:08:21.222 "product_name": "Raid Volume", 00:08:21.222 "block_size": 512, 00:08:21.222 "num_blocks": 190464, 00:08:21.222 "uuid": "0083de72-805c-43b7-8a86-5d021c2a1486", 00:08:21.222 "assigned_rate_limits": { 00:08:21.222 "rw_ios_per_sec": 0, 00:08:21.222 "rw_mbytes_per_sec": 0, 00:08:21.222 "r_mbytes_per_sec": 0, 00:08:21.222 "w_mbytes_per_sec": 0 00:08:21.222 }, 00:08:21.222 "claimed": false, 00:08:21.222 "zoned": false, 00:08:21.222 "supported_io_types": { 00:08:21.222 "read": true, 00:08:21.222 "write": true, 00:08:21.222 "unmap": true, 00:08:21.222 "flush": true, 00:08:21.222 "reset": true, 00:08:21.222 "nvme_admin": false, 00:08:21.222 "nvme_io": false, 00:08:21.222 "nvme_io_md": false, 00:08:21.222 
"write_zeroes": true, 00:08:21.222 "zcopy": false, 00:08:21.222 "get_zone_info": false, 00:08:21.222 "zone_management": false, 00:08:21.222 "zone_append": false, 00:08:21.222 "compare": false, 00:08:21.222 "compare_and_write": false, 00:08:21.223 "abort": false, 00:08:21.223 "seek_hole": false, 00:08:21.223 "seek_data": false, 00:08:21.223 "copy": false, 00:08:21.223 "nvme_iov_md": false 00:08:21.223 }, 00:08:21.223 "memory_domains": [ 00:08:21.223 { 00:08:21.223 "dma_device_id": "system", 00:08:21.223 "dma_device_type": 1 00:08:21.223 }, 00:08:21.223 { 00:08:21.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.223 "dma_device_type": 2 00:08:21.223 }, 00:08:21.223 { 00:08:21.223 "dma_device_id": "system", 00:08:21.223 "dma_device_type": 1 00:08:21.223 }, 00:08:21.223 { 00:08:21.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.223 "dma_device_type": 2 00:08:21.223 }, 00:08:21.223 { 00:08:21.223 "dma_device_id": "system", 00:08:21.223 "dma_device_type": 1 00:08:21.223 }, 00:08:21.223 { 00:08:21.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.223 "dma_device_type": 2 00:08:21.223 } 00:08:21.223 ], 00:08:21.223 "driver_specific": { 00:08:21.223 "raid": { 00:08:21.223 "uuid": "0083de72-805c-43b7-8a86-5d021c2a1486", 00:08:21.223 "strip_size_kb": 64, 00:08:21.223 "state": "online", 00:08:21.223 "raid_level": "raid0", 00:08:21.223 "superblock": true, 00:08:21.223 "num_base_bdevs": 3, 00:08:21.223 "num_base_bdevs_discovered": 3, 00:08:21.223 "num_base_bdevs_operational": 3, 00:08:21.223 "base_bdevs_list": [ 00:08:21.223 { 00:08:21.223 "name": "pt1", 00:08:21.223 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:21.223 "is_configured": true, 00:08:21.223 "data_offset": 2048, 00:08:21.223 "data_size": 63488 00:08:21.223 }, 00:08:21.223 { 00:08:21.223 "name": "pt2", 00:08:21.223 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:21.223 "is_configured": true, 00:08:21.223 "data_offset": 2048, 00:08:21.223 "data_size": 63488 00:08:21.223 }, 00:08:21.223 
{ 00:08:21.223 "name": "pt3", 00:08:21.223 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:21.223 "is_configured": true, 00:08:21.223 "data_offset": 2048, 00:08:21.223 "data_size": 63488 00:08:21.223 } 00:08:21.223 ] 00:08:21.223 } 00:08:21.223 } 00:08:21.223 }' 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:21.223 pt2 00:08:21.223 pt3' 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:21.223 12:51:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.223 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:21.483 
[2024-11-26 12:51:38.899524] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:21.483 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.483 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0083de72-805c-43b7-8a86-5d021c2a1486 '!=' 0083de72-805c-43b7-8a86-5d021c2a1486 ']' 00:08:21.483 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:21.483 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:21.483 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:21.483 12:51:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 76469 00:08:21.483 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 76469 ']' 00:08:21.483 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 76469 00:08:21.483 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:21.483 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:21.483 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76469 00:08:21.483 killing process with pid 76469 00:08:21.483 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:21.483 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:21.483 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76469' 00:08:21.483 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 76469 00:08:21.483 [2024-11-26 12:51:38.973116] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:21.483 [2024-11-26 12:51:38.973211] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.483 [2024-11-26 12:51:38.973272] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.483 [2024-11-26 12:51:38.973282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:21.483 12:51:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 76469 00:08:21.483 [2024-11-26 12:51:39.006125] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:21.743 12:51:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:21.743 00:08:21.743 real 0m3.783s 00:08:21.743 user 0m5.842s 00:08:21.743 sys 0m0.850s 00:08:21.743 12:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.743 12:51:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.743 ************************************ 00:08:21.743 END TEST raid_superblock_test 00:08:21.743 ************************************ 00:08:21.743 12:51:39 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:21.743 12:51:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:21.743 12:51:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.743 12:51:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:21.743 ************************************ 00:08:21.743 START TEST raid_read_error_test 00:08:21.743 ************************************ 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:21.743 12:51:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kPCpBgLYL3 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76706 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76706 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 76706 ']' 00:08:21.743 12:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.744 12:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:21.744 12:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.744 12:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:21.744 12:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.744 [2024-11-26 12:51:39.402811] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:21.744 [2024-11-26 12:51:39.403029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76706 ] 00:08:22.003 [2024-11-26 12:51:39.545024] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.004 [2024-11-26 12:51:39.587874] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.004 [2024-11-26 12:51:39.629422] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.004 [2024-11-26 12:51:39.629550] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.573 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:22.573 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:22.573 12:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:22.573 12:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:22.573 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.573 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.573 BaseBdev1_malloc 00:08:22.573 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.573 12:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:22.573 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.573 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.833 true 00:08:22.833 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:22.833 12:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:22.833 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.833 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.833 [2024-11-26 12:51:40.259214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:22.833 [2024-11-26 12:51:40.259328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.833 [2024-11-26 12:51:40.259359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:22.833 [2024-11-26 12:51:40.259371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.833 [2024-11-26 12:51:40.261501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.833 [2024-11-26 12:51:40.261548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:22.833 BaseBdev1 00:08:22.833 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.833 12:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:22.833 12:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:22.833 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.833 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.833 BaseBdev2_malloc 00:08:22.833 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.833 12:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:22.833 12:51:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.833 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.833 true 00:08:22.833 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.833 12:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:22.833 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.833 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.833 [2024-11-26 12:51:40.316388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:22.833 [2024-11-26 12:51:40.316462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.833 [2024-11-26 12:51:40.316501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:22.833 [2024-11-26 12:51:40.316521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.833 [2024-11-26 12:51:40.319088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.833 [2024-11-26 12:51:40.319137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:22.834 BaseBdev2 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.834 BaseBdev3_malloc 00:08:22.834 12:51:40 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.834 true 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.834 [2024-11-26 12:51:40.356714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:22.834 [2024-11-26 12:51:40.356800] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.834 [2024-11-26 12:51:40.356828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:22.834 [2024-11-26 12:51:40.356840] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.834 [2024-11-26 12:51:40.358923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.834 [2024-11-26 12:51:40.358959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:22.834 BaseBdev3 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.834 [2024-11-26 12:51:40.368748] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.834 [2024-11-26 12:51:40.370534] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:22.834 [2024-11-26 12:51:40.370612] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:22.834 [2024-11-26 12:51:40.370797] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:22.834 [2024-11-26 12:51:40.370813] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:22.834 [2024-11-26 12:51:40.371038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:22.834 [2024-11-26 12:51:40.371182] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:22.834 [2024-11-26 12:51:40.371213] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:22.834 [2024-11-26 12:51:40.371340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.834 12:51:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.834 "name": "raid_bdev1", 00:08:22.834 "uuid": "11fe4b8c-d7ce-43c7-8441-afbcf8c7a491", 00:08:22.834 "strip_size_kb": 64, 00:08:22.834 "state": "online", 00:08:22.834 "raid_level": "raid0", 00:08:22.834 "superblock": true, 00:08:22.834 "num_base_bdevs": 3, 00:08:22.834 "num_base_bdevs_discovered": 3, 00:08:22.834 "num_base_bdevs_operational": 3, 00:08:22.834 "base_bdevs_list": [ 00:08:22.834 { 00:08:22.834 "name": "BaseBdev1", 00:08:22.834 "uuid": "86817fcc-c6df-5ab6-99f2-ae503d6a4a49", 00:08:22.834 "is_configured": true, 00:08:22.834 "data_offset": 2048, 00:08:22.834 "data_size": 63488 00:08:22.834 }, 00:08:22.834 { 00:08:22.834 "name": "BaseBdev2", 00:08:22.834 "uuid": "159c35f3-01a4-519a-b616-fa3c645276ee", 00:08:22.834 "is_configured": true, 00:08:22.834 "data_offset": 2048, 00:08:22.834 "data_size": 63488 
00:08:22.834 }, 00:08:22.834 { 00:08:22.834 "name": "BaseBdev3", 00:08:22.834 "uuid": "470d6c63-7856-507a-bbea-1a8fbd9a07e6", 00:08:22.834 "is_configured": true, 00:08:22.834 "data_offset": 2048, 00:08:22.834 "data_size": 63488 00:08:22.834 } 00:08:22.834 ] 00:08:22.834 }' 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.834 12:51:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.404 12:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:23.404 12:51:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:23.404 [2024-11-26 12:51:40.892303] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:24.344 12:51:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:24.344 12:51:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.344 12:51:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.344 12:51:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.344 12:51:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:24.344 12:51:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:24.344 12:51:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:24.344 12:51:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:24.344 12:51:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:24.344 12:51:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:24.344 12:51:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.344 12:51:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.344 12:51:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.344 12:51:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.344 12:51:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.344 12:51:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.344 12:51:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.344 12:51:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:24.344 12:51:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.344 12:51:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.344 12:51:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.344 12:51:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.344 12:51:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.344 "name": "raid_bdev1", 00:08:24.344 "uuid": "11fe4b8c-d7ce-43c7-8441-afbcf8c7a491", 00:08:24.344 "strip_size_kb": 64, 00:08:24.344 "state": "online", 00:08:24.344 "raid_level": "raid0", 00:08:24.344 "superblock": true, 00:08:24.344 "num_base_bdevs": 3, 00:08:24.344 "num_base_bdevs_discovered": 3, 00:08:24.344 "num_base_bdevs_operational": 3, 00:08:24.344 "base_bdevs_list": [ 00:08:24.344 { 00:08:24.344 "name": "BaseBdev1", 00:08:24.344 "uuid": "86817fcc-c6df-5ab6-99f2-ae503d6a4a49", 00:08:24.344 "is_configured": true, 00:08:24.344 "data_offset": 2048, 00:08:24.344 "data_size": 63488 
00:08:24.344 }, 00:08:24.344 { 00:08:24.344 "name": "BaseBdev2", 00:08:24.344 "uuid": "159c35f3-01a4-519a-b616-fa3c645276ee", 00:08:24.344 "is_configured": true, 00:08:24.344 "data_offset": 2048, 00:08:24.344 "data_size": 63488 00:08:24.344 }, 00:08:24.344 { 00:08:24.344 "name": "BaseBdev3", 00:08:24.344 "uuid": "470d6c63-7856-507a-bbea-1a8fbd9a07e6", 00:08:24.344 "is_configured": true, 00:08:24.344 "data_offset": 2048, 00:08:24.344 "data_size": 63488 00:08:24.344 } 00:08:24.344 ] 00:08:24.344 }' 00:08:24.344 12:51:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.344 12:51:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.914 12:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:24.915 12:51:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.915 12:51:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.915 [2024-11-26 12:51:42.308026] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:24.915 [2024-11-26 12:51:42.308062] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:24.915 [2024-11-26 12:51:42.310464] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.915 [2024-11-26 12:51:42.310530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.915 [2024-11-26 12:51:42.310563] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:24.915 [2024-11-26 12:51:42.310575] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:24.915 { 00:08:24.915 "results": [ 00:08:24.915 { 00:08:24.915 "job": "raid_bdev1", 00:08:24.915 "core_mask": "0x1", 00:08:24.915 "workload": "randrw", 00:08:24.915 "percentage": 50, 
00:08:24.915 "status": "finished", 00:08:24.915 "queue_depth": 1, 00:08:24.915 "io_size": 131072, 00:08:24.915 "runtime": 1.416669, 00:08:24.915 "iops": 17223.50104364534, 00:08:24.915 "mibps": 2152.9376304556677, 00:08:24.915 "io_failed": 1, 00:08:24.915 "io_timeout": 0, 00:08:24.915 "avg_latency_us": 80.51251446670969, 00:08:24.915 "min_latency_us": 19.675109170305678, 00:08:24.915 "max_latency_us": 1345.0620087336245 00:08:24.915 } 00:08:24.915 ], 00:08:24.915 "core_count": 1 00:08:24.915 } 00:08:24.915 12:51:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.915 12:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76706 00:08:24.915 12:51:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 76706 ']' 00:08:24.915 12:51:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 76706 00:08:24.915 12:51:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:24.915 12:51:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:24.915 12:51:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76706 00:08:24.915 12:51:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:24.915 12:51:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:24.915 12:51:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76706' 00:08:24.915 killing process with pid 76706 00:08:24.915 12:51:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 76706 00:08:24.915 [2024-11-26 12:51:42.356581] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:24.915 12:51:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 76706 00:08:24.915 [2024-11-26 
12:51:42.381977] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:25.175 12:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kPCpBgLYL3 00:08:25.175 12:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:25.175 12:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:25.175 12:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:08:25.175 12:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:25.176 12:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:25.176 12:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:25.176 12:51:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:08:25.176 00:08:25.176 real 0m3.325s 00:08:25.176 user 0m4.206s 00:08:25.176 sys 0m0.513s 00:08:25.176 12:51:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.176 12:51:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.176 ************************************ 00:08:25.176 END TEST raid_read_error_test 00:08:25.176 ************************************ 00:08:25.176 12:51:42 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:25.176 12:51:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:25.176 12:51:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.176 12:51:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:25.176 ************************************ 00:08:25.176 START TEST raid_write_error_test 00:08:25.176 ************************************ 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:08:25.176 12:51:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:25.176 12:51:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4C0P4o9S8T 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76839 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76839 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 76839 ']' 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:25.176 12:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.176 [2024-11-26 12:51:42.799228] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:25.176 [2024-11-26 12:51:42.799417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76839 ] 00:08:25.436 [2024-11-26 12:51:42.950896] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.436 [2024-11-26 12:51:42.994556] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.436 [2024-11-26 12:51:43.036207] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.436 [2024-11-26 12:51:43.036325] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.007 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:26.007 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:26.007 12:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:26.007 12:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:26.007 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.007 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.007 BaseBdev1_malloc 00:08:26.007 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.007 12:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:26.007 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.007 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.007 true 00:08:26.007 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.007 12:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:26.007 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.007 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.007 [2024-11-26 12:51:43.650407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:26.007 [2024-11-26 12:51:43.650459] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.007 [2024-11-26 12:51:43.650481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:26.007 [2024-11-26 12:51:43.650492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.007 [2024-11-26 12:51:43.652684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.007 [2024-11-26 12:51:43.652773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:26.007 BaseBdev1 00:08:26.007 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.007 12:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:26.007 12:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:26.008 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.008 12:51:43 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:26.268 BaseBdev2_malloc 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.268 true 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.268 [2024-11-26 12:51:43.711285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:26.268 [2024-11-26 12:51:43.711425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.268 [2024-11-26 12:51:43.711471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:26.268 [2024-11-26 12:51:43.711492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.268 [2024-11-26 12:51:43.714793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.268 [2024-11-26 12:51:43.714907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:26.268 BaseBdev2 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:26.268 12:51:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.268 BaseBdev3_malloc 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.268 true 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.268 [2024-11-26 12:51:43.751706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:26.268 [2024-11-26 12:51:43.751754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.268 [2024-11-26 12:51:43.751779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:26.268 [2024-11-26 12:51:43.751791] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.268 [2024-11-26 12:51:43.753989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.268 [2024-11-26 12:51:43.754027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:26.268 BaseBdev3 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.268 [2024-11-26 12:51:43.763750] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:26.268 [2024-11-26 12:51:43.765645] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:26.268 [2024-11-26 12:51:43.765724] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:26.268 [2024-11-26 12:51:43.765887] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:26.268 [2024-11-26 12:51:43.765902] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:26.268 [2024-11-26 12:51:43.766147] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:26.268 [2024-11-26 12:51:43.766299] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:26.268 [2024-11-26 12:51:43.766311] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:26.268 [2024-11-26 12:51:43.766434] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.268 12:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.268 "name": "raid_bdev1", 00:08:26.268 "uuid": "ac427991-43e1-4145-a0c4-2ab48194211d", 00:08:26.268 "strip_size_kb": 64, 00:08:26.268 "state": "online", 00:08:26.268 "raid_level": "raid0", 00:08:26.268 "superblock": true, 00:08:26.268 "num_base_bdevs": 3, 00:08:26.268 "num_base_bdevs_discovered": 3, 00:08:26.268 "num_base_bdevs_operational": 3, 00:08:26.268 "base_bdevs_list": [ 00:08:26.268 { 00:08:26.268 "name": "BaseBdev1", 
00:08:26.268 "uuid": "045c1c33-30ec-5b0c-b64b-d8e6b9c862dd", 00:08:26.268 "is_configured": true, 00:08:26.268 "data_offset": 2048, 00:08:26.268 "data_size": 63488 00:08:26.268 }, 00:08:26.268 { 00:08:26.268 "name": "BaseBdev2", 00:08:26.268 "uuid": "67d196cb-f996-5d67-a7c9-39fee8d9325d", 00:08:26.268 "is_configured": true, 00:08:26.268 "data_offset": 2048, 00:08:26.268 "data_size": 63488 00:08:26.268 }, 00:08:26.268 { 00:08:26.268 "name": "BaseBdev3", 00:08:26.268 "uuid": "4c9b12d5-a9ef-5e7f-a0dd-585b76f34844", 00:08:26.269 "is_configured": true, 00:08:26.269 "data_offset": 2048, 00:08:26.269 "data_size": 63488 00:08:26.269 } 00:08:26.269 ] 00:08:26.269 }' 00:08:26.269 12:51:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.269 12:51:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.528 12:51:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:26.528 12:51:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:26.788 [2024-11-26 12:51:44.271233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:27.749 12:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:27.749 12:51:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.749 12:51:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.749 12:51:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.749 12:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:27.749 12:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:27.749 12:51:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:27.749 12:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:27.749 12:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:27.749 12:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:27.749 12:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.749 12:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.749 12:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.749 12:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.749 12:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.749 12:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.750 12:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.750 12:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.750 12:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:27.750 12:51:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.750 12:51:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.750 12:51:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.750 12:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.750 "name": "raid_bdev1", 00:08:27.750 "uuid": "ac427991-43e1-4145-a0c4-2ab48194211d", 00:08:27.750 "strip_size_kb": 64, 00:08:27.750 "state": "online", 00:08:27.750 
"raid_level": "raid0", 00:08:27.750 "superblock": true, 00:08:27.750 "num_base_bdevs": 3, 00:08:27.750 "num_base_bdevs_discovered": 3, 00:08:27.750 "num_base_bdevs_operational": 3, 00:08:27.750 "base_bdevs_list": [ 00:08:27.750 { 00:08:27.750 "name": "BaseBdev1", 00:08:27.750 "uuid": "045c1c33-30ec-5b0c-b64b-d8e6b9c862dd", 00:08:27.750 "is_configured": true, 00:08:27.750 "data_offset": 2048, 00:08:27.750 "data_size": 63488 00:08:27.750 }, 00:08:27.750 { 00:08:27.750 "name": "BaseBdev2", 00:08:27.750 "uuid": "67d196cb-f996-5d67-a7c9-39fee8d9325d", 00:08:27.750 "is_configured": true, 00:08:27.750 "data_offset": 2048, 00:08:27.750 "data_size": 63488 00:08:27.750 }, 00:08:27.750 { 00:08:27.750 "name": "BaseBdev3", 00:08:27.750 "uuid": "4c9b12d5-a9ef-5e7f-a0dd-585b76f34844", 00:08:27.750 "is_configured": true, 00:08:27.750 "data_offset": 2048, 00:08:27.750 "data_size": 63488 00:08:27.750 } 00:08:27.750 ] 00:08:27.750 }' 00:08:27.750 12:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.750 12:51:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.010 12:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:28.010 12:51:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.010 12:51:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.010 [2024-11-26 12:51:45.674931] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:28.010 [2024-11-26 12:51:45.674968] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:28.010 [2024-11-26 12:51:45.677446] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:28.010 [2024-11-26 12:51:45.677498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:28.010 [2024-11-26 12:51:45.677531] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:28.010 [2024-11-26 12:51:45.677541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:28.010 { 00:08:28.010 "results": [ 00:08:28.010 { 00:08:28.010 "job": "raid_bdev1", 00:08:28.010 "core_mask": "0x1", 00:08:28.010 "workload": "randrw", 00:08:28.010 "percentage": 50, 00:08:28.010 "status": "finished", 00:08:28.010 "queue_depth": 1, 00:08:28.010 "io_size": 131072, 00:08:28.010 "runtime": 1.404641, 00:08:28.010 "iops": 17545.408399726337, 00:08:28.010 "mibps": 2193.176049965792, 00:08:28.010 "io_failed": 1, 00:08:28.010 "io_timeout": 0, 00:08:28.010 "avg_latency_us": 79.08653673129416, 00:08:28.010 "min_latency_us": 24.258515283842794, 00:08:28.010 "max_latency_us": 1366.5257641921398 00:08:28.010 } 00:08:28.010 ], 00:08:28.010 "core_count": 1 00:08:28.010 } 00:08:28.010 12:51:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.010 12:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76839 00:08:28.010 12:51:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 76839 ']' 00:08:28.010 12:51:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 76839 00:08:28.010 12:51:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:28.270 12:51:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:28.270 12:51:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76839 00:08:28.270 killing process with pid 76839 00:08:28.270 12:51:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:28.270 12:51:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:28.270 12:51:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76839' 00:08:28.270 12:51:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 76839 00:08:28.270 [2024-11-26 12:51:45.712701] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:28.270 12:51:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 76839 00:08:28.270 [2024-11-26 12:51:45.737545] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:28.540 12:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4C0P4o9S8T 00:08:28.540 12:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:28.540 12:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:28.540 12:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:08:28.540 12:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:28.540 ************************************ 00:08:28.540 END TEST raid_write_error_test 00:08:28.541 ************************************ 00:08:28.541 12:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:28.541 12:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:28.541 12:51:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:08:28.541 00:08:28.541 real 0m3.284s 00:08:28.541 user 0m4.111s 00:08:28.541 sys 0m0.524s 00:08:28.541 12:51:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.541 12:51:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.541 12:51:46 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:28.541 12:51:46 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:08:28.541 12:51:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:28.541 12:51:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.541 12:51:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:28.541 ************************************ 00:08:28.541 START TEST raid_state_function_test 00:08:28.541 ************************************ 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:28.541 12:51:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=76967 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76967' 00:08:28.541 Process raid pid: 76967 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 76967 00:08:28.541 12:51:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 76967 ']' 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:28.541 12:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.541 [2024-11-26 12:51:46.154444] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:28.541 [2024-11-26 12:51:46.154621] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.888 [2024-11-26 12:51:46.318258] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.888 [2024-11-26 12:51:46.363603] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.888 [2024-11-26 12:51:46.405337] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:28.888 [2024-11-26 12:51:46.405387] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.457 12:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.457 12:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:29.457 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:29.457 12:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.457 12:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.457 [2024-11-26 12:51:46.986866] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:29.457 [2024-11-26 12:51:46.986917] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:29.458 [2024-11-26 12:51:46.986936] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:29.458 [2024-11-26 12:51:46.986948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:29.458 [2024-11-26 12:51:46.986955] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:29.458 [2024-11-26 12:51:46.986971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:29.458 12:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.458 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:29.458 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.458 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.458 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:29.458 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.458 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.458 12:51:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.458 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.458 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.458 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.458 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.458 12:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.458 12:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.458 12:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.458 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.458 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.458 "name": "Existed_Raid", 00:08:29.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.458 "strip_size_kb": 64, 00:08:29.458 "state": "configuring", 00:08:29.458 "raid_level": "concat", 00:08:29.458 "superblock": false, 00:08:29.458 "num_base_bdevs": 3, 00:08:29.458 "num_base_bdevs_discovered": 0, 00:08:29.458 "num_base_bdevs_operational": 3, 00:08:29.458 "base_bdevs_list": [ 00:08:29.458 { 00:08:29.458 "name": "BaseBdev1", 00:08:29.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.458 "is_configured": false, 00:08:29.458 "data_offset": 0, 00:08:29.458 "data_size": 0 00:08:29.458 }, 00:08:29.458 { 00:08:29.458 "name": "BaseBdev2", 00:08:29.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.458 "is_configured": false, 00:08:29.458 "data_offset": 0, 00:08:29.458 "data_size": 0 00:08:29.458 }, 00:08:29.458 { 00:08:29.458 "name": "BaseBdev3", 00:08:29.458 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:29.458 "is_configured": false, 00:08:29.458 "data_offset": 0, 00:08:29.458 "data_size": 0 00:08:29.458 } 00:08:29.458 ] 00:08:29.458 }' 00:08:29.458 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.458 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.028 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:30.028 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.028 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.028 [2024-11-26 12:51:47.410053] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:30.028 [2024-11-26 12:51:47.410138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:30.028 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.028 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:30.028 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.028 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.028 [2024-11-26 12:51:47.422066] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:30.028 [2024-11-26 12:51:47.422162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:30.028 [2024-11-26 12:51:47.422221] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:30.028 [2024-11-26 12:51:47.422260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:30.028 [2024-11-26 12:51:47.422287] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:30.028 [2024-11-26 12:51:47.422319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:30.028 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.028 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:30.028 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.028 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.028 [2024-11-26 12:51:47.442864] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:30.028 BaseBdev1 00:08:30.028 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.028 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:30.028 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:30.028 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:30.028 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:30.028 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:30.028 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:30.028 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:30.028 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.028 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:30.028 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.028 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:30.028 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.028 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.028 [ 00:08:30.028 { 00:08:30.028 "name": "BaseBdev1", 00:08:30.028 "aliases": [ 00:08:30.028 "aa2ed919-bff0-4177-a7fa-39251b577db3" 00:08:30.028 ], 00:08:30.028 "product_name": "Malloc disk", 00:08:30.028 "block_size": 512, 00:08:30.028 "num_blocks": 65536, 00:08:30.028 "uuid": "aa2ed919-bff0-4177-a7fa-39251b577db3", 00:08:30.028 "assigned_rate_limits": { 00:08:30.028 "rw_ios_per_sec": 0, 00:08:30.028 "rw_mbytes_per_sec": 0, 00:08:30.028 "r_mbytes_per_sec": 0, 00:08:30.028 "w_mbytes_per_sec": 0 00:08:30.028 }, 00:08:30.028 "claimed": true, 00:08:30.028 "claim_type": "exclusive_write", 00:08:30.028 "zoned": false, 00:08:30.028 "supported_io_types": { 00:08:30.028 "read": true, 00:08:30.028 "write": true, 00:08:30.028 "unmap": true, 00:08:30.028 "flush": true, 00:08:30.028 "reset": true, 00:08:30.028 "nvme_admin": false, 00:08:30.028 "nvme_io": false, 00:08:30.028 "nvme_io_md": false, 00:08:30.028 "write_zeroes": true, 00:08:30.028 "zcopy": true, 00:08:30.028 "get_zone_info": false, 00:08:30.028 "zone_management": false, 00:08:30.028 "zone_append": false, 00:08:30.028 "compare": false, 00:08:30.028 "compare_and_write": false, 00:08:30.028 "abort": true, 00:08:30.028 "seek_hole": false, 00:08:30.028 "seek_data": false, 00:08:30.028 "copy": true, 00:08:30.028 "nvme_iov_md": false 00:08:30.028 }, 00:08:30.028 "memory_domains": [ 00:08:30.028 { 00:08:30.028 "dma_device_id": "system", 00:08:30.029 "dma_device_type": 1 00:08:30.029 }, 00:08:30.029 { 00:08:30.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:30.029 "dma_device_type": 2 00:08:30.029 } 00:08:30.029 ], 00:08:30.029 "driver_specific": {} 00:08:30.029 } 00:08:30.029 ] 00:08:30.029 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.029 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:30.029 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:30.029 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.029 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.029 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:30.029 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.029 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.029 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.029 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.029 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.029 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.029 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.029 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.029 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.029 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.029 12:51:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.029 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.029 "name": "Existed_Raid", 00:08:30.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.029 "strip_size_kb": 64, 00:08:30.029 "state": "configuring", 00:08:30.029 "raid_level": "concat", 00:08:30.029 "superblock": false, 00:08:30.029 "num_base_bdevs": 3, 00:08:30.029 "num_base_bdevs_discovered": 1, 00:08:30.029 "num_base_bdevs_operational": 3, 00:08:30.029 "base_bdevs_list": [ 00:08:30.029 { 00:08:30.029 "name": "BaseBdev1", 00:08:30.029 "uuid": "aa2ed919-bff0-4177-a7fa-39251b577db3", 00:08:30.029 "is_configured": true, 00:08:30.029 "data_offset": 0, 00:08:30.029 "data_size": 65536 00:08:30.029 }, 00:08:30.029 { 00:08:30.029 "name": "BaseBdev2", 00:08:30.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.029 "is_configured": false, 00:08:30.029 "data_offset": 0, 00:08:30.029 "data_size": 0 00:08:30.029 }, 00:08:30.029 { 00:08:30.029 "name": "BaseBdev3", 00:08:30.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.029 "is_configured": false, 00:08:30.029 "data_offset": 0, 00:08:30.029 "data_size": 0 00:08:30.029 } 00:08:30.029 ] 00:08:30.029 }' 00:08:30.029 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.029 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.289 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:30.289 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.289 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.289 [2024-11-26 12:51:47.918084] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:30.289 [2024-11-26 12:51:47.918182] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:30.289 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.289 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:30.289 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.289 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.289 [2024-11-26 12:51:47.926112] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:30.289 [2024-11-26 12:51:47.928028] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:30.289 [2024-11-26 12:51:47.928070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:30.289 [2024-11-26 12:51:47.928083] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:30.289 [2024-11-26 12:51:47.928097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:30.289 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.289 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:30.289 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:30.289 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:30.289 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.289 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.289 12:51:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:30.289 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.289 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.289 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.289 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.289 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.289 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.289 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.289 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.289 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.289 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.289 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.549 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.549 "name": "Existed_Raid", 00:08:30.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.549 "strip_size_kb": 64, 00:08:30.549 "state": "configuring", 00:08:30.549 "raid_level": "concat", 00:08:30.549 "superblock": false, 00:08:30.549 "num_base_bdevs": 3, 00:08:30.549 "num_base_bdevs_discovered": 1, 00:08:30.549 "num_base_bdevs_operational": 3, 00:08:30.549 "base_bdevs_list": [ 00:08:30.549 { 00:08:30.549 "name": "BaseBdev1", 00:08:30.549 "uuid": "aa2ed919-bff0-4177-a7fa-39251b577db3", 00:08:30.549 "is_configured": true, 00:08:30.549 "data_offset": 
0, 00:08:30.549 "data_size": 65536 00:08:30.549 }, 00:08:30.549 { 00:08:30.549 "name": "BaseBdev2", 00:08:30.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.549 "is_configured": false, 00:08:30.549 "data_offset": 0, 00:08:30.549 "data_size": 0 00:08:30.549 }, 00:08:30.549 { 00:08:30.549 "name": "BaseBdev3", 00:08:30.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.549 "is_configured": false, 00:08:30.549 "data_offset": 0, 00:08:30.549 "data_size": 0 00:08:30.549 } 00:08:30.549 ] 00:08:30.549 }' 00:08:30.549 12:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.549 12:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.809 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:30.809 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.809 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.809 [2024-11-26 12:51:48.356912] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:30.809 BaseBdev2 00:08:30.809 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.809 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:30.809 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:30.809 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:30.809 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:30.809 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:30.809 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:08:30.809 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:30.809 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.809 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.809 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.809 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:30.810 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.810 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.810 [ 00:08:30.810 { 00:08:30.810 "name": "BaseBdev2", 00:08:30.810 "aliases": [ 00:08:30.810 "b6e448ce-290e-442e-b1e6-24f11820ffce" 00:08:30.810 ], 00:08:30.810 "product_name": "Malloc disk", 00:08:30.810 "block_size": 512, 00:08:30.810 "num_blocks": 65536, 00:08:30.810 "uuid": "b6e448ce-290e-442e-b1e6-24f11820ffce", 00:08:30.810 "assigned_rate_limits": { 00:08:30.810 "rw_ios_per_sec": 0, 00:08:30.810 "rw_mbytes_per_sec": 0, 00:08:30.810 "r_mbytes_per_sec": 0, 00:08:30.810 "w_mbytes_per_sec": 0 00:08:30.810 }, 00:08:30.810 "claimed": true, 00:08:30.810 "claim_type": "exclusive_write", 00:08:30.810 "zoned": false, 00:08:30.810 "supported_io_types": { 00:08:30.810 "read": true, 00:08:30.810 "write": true, 00:08:30.810 "unmap": true, 00:08:30.810 "flush": true, 00:08:30.810 "reset": true, 00:08:30.810 "nvme_admin": false, 00:08:30.810 "nvme_io": false, 00:08:30.810 "nvme_io_md": false, 00:08:30.810 "write_zeroes": true, 00:08:30.810 "zcopy": true, 00:08:30.810 "get_zone_info": false, 00:08:30.810 "zone_management": false, 00:08:30.810 "zone_append": false, 00:08:30.810 "compare": false, 00:08:30.810 "compare_and_write": false, 00:08:30.810 "abort": true, 00:08:30.810 "seek_hole": 
false, 00:08:30.810 "seek_data": false, 00:08:30.810 "copy": true, 00:08:30.810 "nvme_iov_md": false 00:08:30.810 }, 00:08:30.810 "memory_domains": [ 00:08:30.810 { 00:08:30.810 "dma_device_id": "system", 00:08:30.810 "dma_device_type": 1 00:08:30.810 }, 00:08:30.810 { 00:08:30.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.810 "dma_device_type": 2 00:08:30.810 } 00:08:30.810 ], 00:08:30.810 "driver_specific": {} 00:08:30.810 } 00:08:30.810 ] 00:08:30.810 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.810 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:30.810 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:30.810 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:30.810 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:30.810 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.810 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.810 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:30.810 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.810 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.810 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.810 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.810 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.810 12:51:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.810 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.810 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.810 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.810 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.810 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.810 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.810 "name": "Existed_Raid", 00:08:30.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.810 "strip_size_kb": 64, 00:08:30.810 "state": "configuring", 00:08:30.810 "raid_level": "concat", 00:08:30.810 "superblock": false, 00:08:30.810 "num_base_bdevs": 3, 00:08:30.810 "num_base_bdevs_discovered": 2, 00:08:30.810 "num_base_bdevs_operational": 3, 00:08:30.810 "base_bdevs_list": [ 00:08:30.810 { 00:08:30.810 "name": "BaseBdev1", 00:08:30.810 "uuid": "aa2ed919-bff0-4177-a7fa-39251b577db3", 00:08:30.810 "is_configured": true, 00:08:30.810 "data_offset": 0, 00:08:30.810 "data_size": 65536 00:08:30.810 }, 00:08:30.810 { 00:08:30.810 "name": "BaseBdev2", 00:08:30.810 "uuid": "b6e448ce-290e-442e-b1e6-24f11820ffce", 00:08:30.810 "is_configured": true, 00:08:30.810 "data_offset": 0, 00:08:30.810 "data_size": 65536 00:08:30.810 }, 00:08:30.810 { 00:08:30.810 "name": "BaseBdev3", 00:08:30.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.810 "is_configured": false, 00:08:30.810 "data_offset": 0, 00:08:30.810 "data_size": 0 00:08:30.810 } 00:08:30.810 ] 00:08:30.810 }' 00:08:30.810 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.810 12:51:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:31.380 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:31.380 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.380 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.380 [2024-11-26 12:51:48.815228] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:31.380 [2024-11-26 12:51:48.815267] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:31.380 [2024-11-26 12:51:48.815277] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:31.380 [2024-11-26 12:51:48.815592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:31.380 [2024-11-26 12:51:48.815728] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:31.380 [2024-11-26 12:51:48.815738] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:31.380 [2024-11-26 12:51:48.815926] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:31.380 BaseBdev3 00:08:31.380 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.380 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:31.380 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:31.380 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:31.380 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:31.380 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:31.380 12:51:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:31.380 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:31.380 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.380 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.380 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.380 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:31.380 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.380 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.380 [ 00:08:31.380 { 00:08:31.380 "name": "BaseBdev3", 00:08:31.380 "aliases": [ 00:08:31.380 "7c6353d6-2e15-432a-9e50-760cb8b65baa" 00:08:31.380 ], 00:08:31.380 "product_name": "Malloc disk", 00:08:31.380 "block_size": 512, 00:08:31.380 "num_blocks": 65536, 00:08:31.381 "uuid": "7c6353d6-2e15-432a-9e50-760cb8b65baa", 00:08:31.381 "assigned_rate_limits": { 00:08:31.381 "rw_ios_per_sec": 0, 00:08:31.381 "rw_mbytes_per_sec": 0, 00:08:31.381 "r_mbytes_per_sec": 0, 00:08:31.381 "w_mbytes_per_sec": 0 00:08:31.381 }, 00:08:31.381 "claimed": true, 00:08:31.381 "claim_type": "exclusive_write", 00:08:31.381 "zoned": false, 00:08:31.381 "supported_io_types": { 00:08:31.381 "read": true, 00:08:31.381 "write": true, 00:08:31.381 "unmap": true, 00:08:31.381 "flush": true, 00:08:31.381 "reset": true, 00:08:31.381 "nvme_admin": false, 00:08:31.381 "nvme_io": false, 00:08:31.381 "nvme_io_md": false, 00:08:31.381 "write_zeroes": true, 00:08:31.381 "zcopy": true, 00:08:31.381 "get_zone_info": false, 00:08:31.381 "zone_management": false, 00:08:31.381 "zone_append": false, 00:08:31.381 "compare": false, 
00:08:31.381 "compare_and_write": false, 00:08:31.381 "abort": true, 00:08:31.381 "seek_hole": false, 00:08:31.381 "seek_data": false, 00:08:31.381 "copy": true, 00:08:31.381 "nvme_iov_md": false 00:08:31.381 }, 00:08:31.381 "memory_domains": [ 00:08:31.381 { 00:08:31.381 "dma_device_id": "system", 00:08:31.381 "dma_device_type": 1 00:08:31.381 }, 00:08:31.381 { 00:08:31.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.381 "dma_device_type": 2 00:08:31.381 } 00:08:31.381 ], 00:08:31.381 "driver_specific": {} 00:08:31.381 } 00:08:31.381 ] 00:08:31.381 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.381 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:31.381 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:31.381 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:31.381 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:31.381 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.381 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:31.381 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:31.381 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.381 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.381 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.381 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.381 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:31.381 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.381 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.381 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.381 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.381 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.381 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.381 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.381 "name": "Existed_Raid", 00:08:31.381 "uuid": "b03dae0e-9be8-4e75-81eb-8a06ce092422", 00:08:31.381 "strip_size_kb": 64, 00:08:31.381 "state": "online", 00:08:31.381 "raid_level": "concat", 00:08:31.381 "superblock": false, 00:08:31.381 "num_base_bdevs": 3, 00:08:31.381 "num_base_bdevs_discovered": 3, 00:08:31.381 "num_base_bdevs_operational": 3, 00:08:31.381 "base_bdevs_list": [ 00:08:31.381 { 00:08:31.381 "name": "BaseBdev1", 00:08:31.381 "uuid": "aa2ed919-bff0-4177-a7fa-39251b577db3", 00:08:31.381 "is_configured": true, 00:08:31.381 "data_offset": 0, 00:08:31.381 "data_size": 65536 00:08:31.381 }, 00:08:31.381 { 00:08:31.381 "name": "BaseBdev2", 00:08:31.381 "uuid": "b6e448ce-290e-442e-b1e6-24f11820ffce", 00:08:31.381 "is_configured": true, 00:08:31.381 "data_offset": 0, 00:08:31.381 "data_size": 65536 00:08:31.381 }, 00:08:31.381 { 00:08:31.381 "name": "BaseBdev3", 00:08:31.381 "uuid": "7c6353d6-2e15-432a-9e50-760cb8b65baa", 00:08:31.381 "is_configured": true, 00:08:31.381 "data_offset": 0, 00:08:31.381 "data_size": 65536 00:08:31.381 } 00:08:31.381 ] 00:08:31.381 }' 00:08:31.381 12:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:31.381 12:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.641 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:31.641 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:31.641 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:31.641 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:31.641 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:31.641 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:31.641 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:31.641 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:31.641 12:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.641 12:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.641 [2024-11-26 12:51:49.286692] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:31.641 12:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.641 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:31.641 "name": "Existed_Raid", 00:08:31.641 "aliases": [ 00:08:31.641 "b03dae0e-9be8-4e75-81eb-8a06ce092422" 00:08:31.641 ], 00:08:31.641 "product_name": "Raid Volume", 00:08:31.641 "block_size": 512, 00:08:31.641 "num_blocks": 196608, 00:08:31.641 "uuid": "b03dae0e-9be8-4e75-81eb-8a06ce092422", 00:08:31.641 "assigned_rate_limits": { 00:08:31.641 "rw_ios_per_sec": 0, 00:08:31.641 "rw_mbytes_per_sec": 0, 00:08:31.641 "r_mbytes_per_sec": 
0, 00:08:31.641 "w_mbytes_per_sec": 0 00:08:31.641 }, 00:08:31.641 "claimed": false, 00:08:31.641 "zoned": false, 00:08:31.641 "supported_io_types": { 00:08:31.641 "read": true, 00:08:31.641 "write": true, 00:08:31.641 "unmap": true, 00:08:31.641 "flush": true, 00:08:31.641 "reset": true, 00:08:31.641 "nvme_admin": false, 00:08:31.641 "nvme_io": false, 00:08:31.641 "nvme_io_md": false, 00:08:31.641 "write_zeroes": true, 00:08:31.641 "zcopy": false, 00:08:31.641 "get_zone_info": false, 00:08:31.641 "zone_management": false, 00:08:31.641 "zone_append": false, 00:08:31.641 "compare": false, 00:08:31.641 "compare_and_write": false, 00:08:31.641 "abort": false, 00:08:31.641 "seek_hole": false, 00:08:31.641 "seek_data": false, 00:08:31.641 "copy": false, 00:08:31.641 "nvme_iov_md": false 00:08:31.641 }, 00:08:31.641 "memory_domains": [ 00:08:31.641 { 00:08:31.641 "dma_device_id": "system", 00:08:31.641 "dma_device_type": 1 00:08:31.641 }, 00:08:31.641 { 00:08:31.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.641 "dma_device_type": 2 00:08:31.641 }, 00:08:31.641 { 00:08:31.641 "dma_device_id": "system", 00:08:31.641 "dma_device_type": 1 00:08:31.641 }, 00:08:31.641 { 00:08:31.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.641 "dma_device_type": 2 00:08:31.641 }, 00:08:31.641 { 00:08:31.641 "dma_device_id": "system", 00:08:31.641 "dma_device_type": 1 00:08:31.641 }, 00:08:31.641 { 00:08:31.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.641 "dma_device_type": 2 00:08:31.641 } 00:08:31.641 ], 00:08:31.641 "driver_specific": { 00:08:31.641 "raid": { 00:08:31.641 "uuid": "b03dae0e-9be8-4e75-81eb-8a06ce092422", 00:08:31.641 "strip_size_kb": 64, 00:08:31.641 "state": "online", 00:08:31.641 "raid_level": "concat", 00:08:31.641 "superblock": false, 00:08:31.641 "num_base_bdevs": 3, 00:08:31.641 "num_base_bdevs_discovered": 3, 00:08:31.641 "num_base_bdevs_operational": 3, 00:08:31.641 "base_bdevs_list": [ 00:08:31.641 { 00:08:31.641 "name": "BaseBdev1", 
00:08:31.641 "uuid": "aa2ed919-bff0-4177-a7fa-39251b577db3", 00:08:31.641 "is_configured": true, 00:08:31.641 "data_offset": 0, 00:08:31.641 "data_size": 65536 00:08:31.641 }, 00:08:31.641 { 00:08:31.641 "name": "BaseBdev2", 00:08:31.641 "uuid": "b6e448ce-290e-442e-b1e6-24f11820ffce", 00:08:31.641 "is_configured": true, 00:08:31.641 "data_offset": 0, 00:08:31.641 "data_size": 65536 00:08:31.641 }, 00:08:31.641 { 00:08:31.641 "name": "BaseBdev3", 00:08:31.641 "uuid": "7c6353d6-2e15-432a-9e50-760cb8b65baa", 00:08:31.641 "is_configured": true, 00:08:31.641 "data_offset": 0, 00:08:31.641 "data_size": 65536 00:08:31.641 } 00:08:31.641 ] 00:08:31.641 } 00:08:31.641 } 00:08:31.641 }' 00:08:31.641 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:31.901 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:31.901 BaseBdev2 00:08:31.901 BaseBdev3' 00:08:31.901 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.901 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:31.901 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.901 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:31.901 12:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.901 12:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.901 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.901 12:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:31.901 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.901 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.901 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.901 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.901 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.902 [2024-11-26 12:51:49.546042] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:31.902 [2024-11-26 12:51:49.546108] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:31.902 [2024-11-26 12:51:49.546208] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.902 12:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.162 12:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.162 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.162 "name": "Existed_Raid", 00:08:32.162 "uuid": "b03dae0e-9be8-4e75-81eb-8a06ce092422", 00:08:32.162 "strip_size_kb": 64, 00:08:32.162 "state": "offline", 00:08:32.162 "raid_level": "concat", 00:08:32.162 "superblock": false, 00:08:32.162 "num_base_bdevs": 3, 00:08:32.162 "num_base_bdevs_discovered": 2, 00:08:32.162 "num_base_bdevs_operational": 2, 00:08:32.162 "base_bdevs_list": [ 00:08:32.162 { 00:08:32.162 "name": null, 00:08:32.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.162 "is_configured": false, 00:08:32.162 "data_offset": 0, 00:08:32.162 "data_size": 65536 00:08:32.162 }, 00:08:32.162 { 00:08:32.162 "name": "BaseBdev2", 00:08:32.162 "uuid": 
"b6e448ce-290e-442e-b1e6-24f11820ffce", 00:08:32.162 "is_configured": true, 00:08:32.162 "data_offset": 0, 00:08:32.162 "data_size": 65536 00:08:32.162 }, 00:08:32.162 { 00:08:32.162 "name": "BaseBdev3", 00:08:32.162 "uuid": "7c6353d6-2e15-432a-9e50-760cb8b65baa", 00:08:32.162 "is_configured": true, 00:08:32.162 "data_offset": 0, 00:08:32.162 "data_size": 65536 00:08:32.162 } 00:08:32.162 ] 00:08:32.162 }' 00:08:32.162 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.162 12:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.422 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:32.422 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:32.422 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.422 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:32.422 12:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.422 12:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.422 12:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.422 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:32.422 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:32.422 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:32.422 12:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.422 12:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.422 [2024-11-26 12:51:49.988739] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:32.422 12:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.422 12:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:32.423 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:32.423 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.423 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.423 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.423 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:32.423 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.423 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:32.423 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:32.423 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:32.423 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.423 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.423 [2024-11-26 12:51:50.039745] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:32.423 [2024-11-26 12:51:50.039791] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:32.423 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.423 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:32.423 12:51:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:32.423 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.423 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.423 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:32.423 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.423 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.423 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:32.423 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:32.423 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:32.423 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:32.423 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:32.423 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:32.423 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.423 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.683 BaseBdev2 00:08:32.683 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.683 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:32.683 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:32.683 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:32.683 
12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:32.683 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:32.683 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:32.683 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:32.683 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.683 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.683 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.683 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:32.683 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.684 [ 00:08:32.684 { 00:08:32.684 "name": "BaseBdev2", 00:08:32.684 "aliases": [ 00:08:32.684 "63f64cf7-9a28-4bc9-9b05-c1d7bb981dc4" 00:08:32.684 ], 00:08:32.684 "product_name": "Malloc disk", 00:08:32.684 "block_size": 512, 00:08:32.684 "num_blocks": 65536, 00:08:32.684 "uuid": "63f64cf7-9a28-4bc9-9b05-c1d7bb981dc4", 00:08:32.684 "assigned_rate_limits": { 00:08:32.684 "rw_ios_per_sec": 0, 00:08:32.684 "rw_mbytes_per_sec": 0, 00:08:32.684 "r_mbytes_per_sec": 0, 00:08:32.684 "w_mbytes_per_sec": 0 00:08:32.684 }, 00:08:32.684 "claimed": false, 00:08:32.684 "zoned": false, 00:08:32.684 "supported_io_types": { 00:08:32.684 "read": true, 00:08:32.684 "write": true, 00:08:32.684 "unmap": true, 00:08:32.684 "flush": true, 00:08:32.684 "reset": true, 00:08:32.684 "nvme_admin": false, 00:08:32.684 "nvme_io": false, 00:08:32.684 "nvme_io_md": false, 00:08:32.684 "write_zeroes": true, 
00:08:32.684 "zcopy": true, 00:08:32.684 "get_zone_info": false, 00:08:32.684 "zone_management": false, 00:08:32.684 "zone_append": false, 00:08:32.684 "compare": false, 00:08:32.684 "compare_and_write": false, 00:08:32.684 "abort": true, 00:08:32.684 "seek_hole": false, 00:08:32.684 "seek_data": false, 00:08:32.684 "copy": true, 00:08:32.684 "nvme_iov_md": false 00:08:32.684 }, 00:08:32.684 "memory_domains": [ 00:08:32.684 { 00:08:32.684 "dma_device_id": "system", 00:08:32.684 "dma_device_type": 1 00:08:32.684 }, 00:08:32.684 { 00:08:32.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.684 "dma_device_type": 2 00:08:32.684 } 00:08:32.684 ], 00:08:32.684 "driver_specific": {} 00:08:32.684 } 00:08:32.684 ] 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.684 BaseBdev3 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:32.684 12:51:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.684 [ 00:08:32.684 { 00:08:32.684 "name": "BaseBdev3", 00:08:32.684 "aliases": [ 00:08:32.684 "52bde137-734a-4960-ad80-100561b6988d" 00:08:32.684 ], 00:08:32.684 "product_name": "Malloc disk", 00:08:32.684 "block_size": 512, 00:08:32.684 "num_blocks": 65536, 00:08:32.684 "uuid": "52bde137-734a-4960-ad80-100561b6988d", 00:08:32.684 "assigned_rate_limits": { 00:08:32.684 "rw_ios_per_sec": 0, 00:08:32.684 "rw_mbytes_per_sec": 0, 00:08:32.684 "r_mbytes_per_sec": 0, 00:08:32.684 "w_mbytes_per_sec": 0 00:08:32.684 }, 00:08:32.684 "claimed": false, 00:08:32.684 "zoned": false, 00:08:32.684 "supported_io_types": { 00:08:32.684 "read": true, 00:08:32.684 "write": true, 00:08:32.684 "unmap": true, 00:08:32.684 "flush": true, 00:08:32.684 "reset": true, 00:08:32.684 "nvme_admin": false, 00:08:32.684 "nvme_io": false, 00:08:32.684 "nvme_io_md": false, 00:08:32.684 "write_zeroes": true, 
00:08:32.684 "zcopy": true, 00:08:32.684 "get_zone_info": false, 00:08:32.684 "zone_management": false, 00:08:32.684 "zone_append": false, 00:08:32.684 "compare": false, 00:08:32.684 "compare_and_write": false, 00:08:32.684 "abort": true, 00:08:32.684 "seek_hole": false, 00:08:32.684 "seek_data": false, 00:08:32.684 "copy": true, 00:08:32.684 "nvme_iov_md": false 00:08:32.684 }, 00:08:32.684 "memory_domains": [ 00:08:32.684 { 00:08:32.684 "dma_device_id": "system", 00:08:32.684 "dma_device_type": 1 00:08:32.684 }, 00:08:32.684 { 00:08:32.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.684 "dma_device_type": 2 00:08:32.684 } 00:08:32.684 ], 00:08:32.684 "driver_specific": {} 00:08:32.684 } 00:08:32.684 ] 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.684 [2024-11-26 12:51:50.203382] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:32.684 [2024-11-26 12:51:50.203465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:32.684 [2024-11-26 12:51:50.203525] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:32.684 [2024-11-26 12:51:50.205527] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.684 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.685 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:32.685 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.685 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.685 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.685 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.685 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.685 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.685 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.685 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.685 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.685 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.685 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.685 12:51:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.685 "name": "Existed_Raid", 00:08:32.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.685 "strip_size_kb": 64, 00:08:32.685 "state": "configuring", 00:08:32.685 "raid_level": "concat", 00:08:32.685 "superblock": false, 00:08:32.685 "num_base_bdevs": 3, 00:08:32.685 "num_base_bdevs_discovered": 2, 00:08:32.685 "num_base_bdevs_operational": 3, 00:08:32.685 "base_bdevs_list": [ 00:08:32.685 { 00:08:32.685 "name": "BaseBdev1", 00:08:32.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.685 "is_configured": false, 00:08:32.685 "data_offset": 0, 00:08:32.685 "data_size": 0 00:08:32.685 }, 00:08:32.685 { 00:08:32.685 "name": "BaseBdev2", 00:08:32.685 "uuid": "63f64cf7-9a28-4bc9-9b05-c1d7bb981dc4", 00:08:32.685 "is_configured": true, 00:08:32.685 "data_offset": 0, 00:08:32.685 "data_size": 65536 00:08:32.685 }, 00:08:32.685 { 00:08:32.685 "name": "BaseBdev3", 00:08:32.685 "uuid": "52bde137-734a-4960-ad80-100561b6988d", 00:08:32.685 "is_configured": true, 00:08:32.685 "data_offset": 0, 00:08:32.685 "data_size": 65536 00:08:32.685 } 00:08:32.685 ] 00:08:32.685 }' 00:08:32.685 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.685 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.254 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:33.254 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.254 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.254 [2024-11-26 12:51:50.638682] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:33.254 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.254 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:33.254 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.254 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.254 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:33.254 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.254 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.254 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.254 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.254 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.254 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.254 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.254 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.254 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.254 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.254 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.254 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.254 "name": "Existed_Raid", 00:08:33.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.254 "strip_size_kb": 64, 00:08:33.254 "state": "configuring", 00:08:33.254 "raid_level": "concat", 00:08:33.254 "superblock": false, 
00:08:33.254 "num_base_bdevs": 3, 00:08:33.254 "num_base_bdevs_discovered": 1, 00:08:33.254 "num_base_bdevs_operational": 3, 00:08:33.255 "base_bdevs_list": [ 00:08:33.255 { 00:08:33.255 "name": "BaseBdev1", 00:08:33.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.255 "is_configured": false, 00:08:33.255 "data_offset": 0, 00:08:33.255 "data_size": 0 00:08:33.255 }, 00:08:33.255 { 00:08:33.255 "name": null, 00:08:33.255 "uuid": "63f64cf7-9a28-4bc9-9b05-c1d7bb981dc4", 00:08:33.255 "is_configured": false, 00:08:33.255 "data_offset": 0, 00:08:33.255 "data_size": 65536 00:08:33.255 }, 00:08:33.255 { 00:08:33.255 "name": "BaseBdev3", 00:08:33.255 "uuid": "52bde137-734a-4960-ad80-100561b6988d", 00:08:33.255 "is_configured": true, 00:08:33.255 "data_offset": 0, 00:08:33.255 "data_size": 65536 00:08:33.255 } 00:08:33.255 ] 00:08:33.255 }' 00:08:33.255 12:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.255 12:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.515 
12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.515 [2024-11-26 12:51:51.096827] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:33.515 BaseBdev1 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.515 [ 00:08:33.515 { 00:08:33.515 "name": "BaseBdev1", 00:08:33.515 "aliases": [ 00:08:33.515 "8745ca43-a9fb-4e4d-b46b-991d2d51c892" 00:08:33.515 ], 00:08:33.515 "product_name": 
"Malloc disk", 00:08:33.515 "block_size": 512, 00:08:33.515 "num_blocks": 65536, 00:08:33.515 "uuid": "8745ca43-a9fb-4e4d-b46b-991d2d51c892", 00:08:33.515 "assigned_rate_limits": { 00:08:33.515 "rw_ios_per_sec": 0, 00:08:33.515 "rw_mbytes_per_sec": 0, 00:08:33.515 "r_mbytes_per_sec": 0, 00:08:33.515 "w_mbytes_per_sec": 0 00:08:33.515 }, 00:08:33.515 "claimed": true, 00:08:33.515 "claim_type": "exclusive_write", 00:08:33.515 "zoned": false, 00:08:33.515 "supported_io_types": { 00:08:33.515 "read": true, 00:08:33.515 "write": true, 00:08:33.515 "unmap": true, 00:08:33.515 "flush": true, 00:08:33.515 "reset": true, 00:08:33.515 "nvme_admin": false, 00:08:33.515 "nvme_io": false, 00:08:33.515 "nvme_io_md": false, 00:08:33.515 "write_zeroes": true, 00:08:33.515 "zcopy": true, 00:08:33.515 "get_zone_info": false, 00:08:33.515 "zone_management": false, 00:08:33.515 "zone_append": false, 00:08:33.515 "compare": false, 00:08:33.515 "compare_and_write": false, 00:08:33.515 "abort": true, 00:08:33.515 "seek_hole": false, 00:08:33.515 "seek_data": false, 00:08:33.515 "copy": true, 00:08:33.515 "nvme_iov_md": false 00:08:33.515 }, 00:08:33.515 "memory_domains": [ 00:08:33.515 { 00:08:33.515 "dma_device_id": "system", 00:08:33.515 "dma_device_type": 1 00:08:33.515 }, 00:08:33.515 { 00:08:33.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.515 "dma_device_type": 2 00:08:33.515 } 00:08:33.515 ], 00:08:33.515 "driver_specific": {} 00:08:33.515 } 00:08:33.515 ] 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.515 12:51:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.515 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.515 "name": "Existed_Raid", 00:08:33.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.516 "strip_size_kb": 64, 00:08:33.516 "state": "configuring", 00:08:33.516 "raid_level": "concat", 00:08:33.516 "superblock": false, 00:08:33.516 "num_base_bdevs": 3, 00:08:33.516 "num_base_bdevs_discovered": 2, 00:08:33.516 "num_base_bdevs_operational": 3, 00:08:33.516 "base_bdevs_list": [ 00:08:33.516 { 00:08:33.516 "name": "BaseBdev1", 
00:08:33.516 "uuid": "8745ca43-a9fb-4e4d-b46b-991d2d51c892", 00:08:33.516 "is_configured": true, 00:08:33.516 "data_offset": 0, 00:08:33.516 "data_size": 65536 00:08:33.516 }, 00:08:33.516 { 00:08:33.516 "name": null, 00:08:33.516 "uuid": "63f64cf7-9a28-4bc9-9b05-c1d7bb981dc4", 00:08:33.516 "is_configured": false, 00:08:33.516 "data_offset": 0, 00:08:33.516 "data_size": 65536 00:08:33.516 }, 00:08:33.516 { 00:08:33.516 "name": "BaseBdev3", 00:08:33.516 "uuid": "52bde137-734a-4960-ad80-100561b6988d", 00:08:33.516 "is_configured": true, 00:08:33.516 "data_offset": 0, 00:08:33.516 "data_size": 65536 00:08:33.516 } 00:08:33.516 ] 00:08:33.516 }' 00:08:33.516 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.516 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.085 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:34.085 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.085 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.085 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.085 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.085 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:34.085 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:34.085 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.085 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.085 [2024-11-26 12:51:51.576045] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:34.085 
12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.085 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:34.085 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.085 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.085 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:34.085 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.085 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.085 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.085 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.085 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.085 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.085 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.085 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.085 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.085 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.085 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.085 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.085 "name": "Existed_Raid", 00:08:34.085 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:34.085 "strip_size_kb": 64, 00:08:34.085 "state": "configuring", 00:08:34.085 "raid_level": "concat", 00:08:34.085 "superblock": false, 00:08:34.085 "num_base_bdevs": 3, 00:08:34.085 "num_base_bdevs_discovered": 1, 00:08:34.085 "num_base_bdevs_operational": 3, 00:08:34.085 "base_bdevs_list": [ 00:08:34.085 { 00:08:34.085 "name": "BaseBdev1", 00:08:34.085 "uuid": "8745ca43-a9fb-4e4d-b46b-991d2d51c892", 00:08:34.085 "is_configured": true, 00:08:34.085 "data_offset": 0, 00:08:34.085 "data_size": 65536 00:08:34.085 }, 00:08:34.085 { 00:08:34.085 "name": null, 00:08:34.085 "uuid": "63f64cf7-9a28-4bc9-9b05-c1d7bb981dc4", 00:08:34.085 "is_configured": false, 00:08:34.085 "data_offset": 0, 00:08:34.085 "data_size": 65536 00:08:34.085 }, 00:08:34.085 { 00:08:34.085 "name": null, 00:08:34.085 "uuid": "52bde137-734a-4960-ad80-100561b6988d", 00:08:34.085 "is_configured": false, 00:08:34.085 "data_offset": 0, 00:08:34.085 "data_size": 65536 00:08:34.085 } 00:08:34.085 ] 00:08:34.085 }' 00:08:34.085 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.085 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.344 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.344 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:34.344 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.344 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.603 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.603 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:34.603 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:34.603 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.603 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.603 [2024-11-26 12:51:52.051307] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:34.603 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.603 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:34.603 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.603 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.603 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:34.603 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.603 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.603 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.603 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.603 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.603 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.603 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.603 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.603 12:51:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.603 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.603 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.603 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.603 "name": "Existed_Raid", 00:08:34.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.603 "strip_size_kb": 64, 00:08:34.603 "state": "configuring", 00:08:34.603 "raid_level": "concat", 00:08:34.603 "superblock": false, 00:08:34.603 "num_base_bdevs": 3, 00:08:34.603 "num_base_bdevs_discovered": 2, 00:08:34.603 "num_base_bdevs_operational": 3, 00:08:34.603 "base_bdevs_list": [ 00:08:34.603 { 00:08:34.603 "name": "BaseBdev1", 00:08:34.603 "uuid": "8745ca43-a9fb-4e4d-b46b-991d2d51c892", 00:08:34.603 "is_configured": true, 00:08:34.603 "data_offset": 0, 00:08:34.603 "data_size": 65536 00:08:34.603 }, 00:08:34.603 { 00:08:34.603 "name": null, 00:08:34.603 "uuid": "63f64cf7-9a28-4bc9-9b05-c1d7bb981dc4", 00:08:34.603 "is_configured": false, 00:08:34.603 "data_offset": 0, 00:08:34.603 "data_size": 65536 00:08:34.603 }, 00:08:34.603 { 00:08:34.603 "name": "BaseBdev3", 00:08:34.603 "uuid": "52bde137-734a-4960-ad80-100561b6988d", 00:08:34.603 "is_configured": true, 00:08:34.603 "data_offset": 0, 00:08:34.603 "data_size": 65536 00:08:34.603 } 00:08:34.603 ] 00:08:34.603 }' 00:08:34.603 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.603 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.862 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:34.862 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.862 12:51:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.862 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.862 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.121 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:35.121 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:35.121 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.121 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.121 [2024-11-26 12:51:52.546464] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:35.121 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.121 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:35.121 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.121 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.121 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:35.121 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.121 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.121 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.121 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.121 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.121 
12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.121 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.121 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.121 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.121 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.121 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.121 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.121 "name": "Existed_Raid", 00:08:35.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.121 "strip_size_kb": 64, 00:08:35.121 "state": "configuring", 00:08:35.121 "raid_level": "concat", 00:08:35.121 "superblock": false, 00:08:35.121 "num_base_bdevs": 3, 00:08:35.121 "num_base_bdevs_discovered": 1, 00:08:35.121 "num_base_bdevs_operational": 3, 00:08:35.121 "base_bdevs_list": [ 00:08:35.121 { 00:08:35.121 "name": null, 00:08:35.121 "uuid": "8745ca43-a9fb-4e4d-b46b-991d2d51c892", 00:08:35.121 "is_configured": false, 00:08:35.121 "data_offset": 0, 00:08:35.121 "data_size": 65536 00:08:35.121 }, 00:08:35.121 { 00:08:35.121 "name": null, 00:08:35.121 "uuid": "63f64cf7-9a28-4bc9-9b05-c1d7bb981dc4", 00:08:35.121 "is_configured": false, 00:08:35.121 "data_offset": 0, 00:08:35.121 "data_size": 65536 00:08:35.121 }, 00:08:35.121 { 00:08:35.121 "name": "BaseBdev3", 00:08:35.121 "uuid": "52bde137-734a-4960-ad80-100561b6988d", 00:08:35.121 "is_configured": true, 00:08:35.121 "data_offset": 0, 00:08:35.121 "data_size": 65536 00:08:35.121 } 00:08:35.121 ] 00:08:35.121 }' 00:08:35.121 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.121 12:51:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.380 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.380 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.380 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.380 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:35.380 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.380 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:35.380 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:35.380 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.380 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.380 [2024-11-26 12:51:52.988355] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:35.380 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.380 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:35.380 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.380 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.380 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:35.380 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.380 12:51:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.380 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.380 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.380 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.380 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.380 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.380 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.380 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.380 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.380 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.380 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.380 "name": "Existed_Raid", 00:08:35.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.380 "strip_size_kb": 64, 00:08:35.380 "state": "configuring", 00:08:35.380 "raid_level": "concat", 00:08:35.380 "superblock": false, 00:08:35.380 "num_base_bdevs": 3, 00:08:35.380 "num_base_bdevs_discovered": 2, 00:08:35.380 "num_base_bdevs_operational": 3, 00:08:35.380 "base_bdevs_list": [ 00:08:35.380 { 00:08:35.380 "name": null, 00:08:35.380 "uuid": "8745ca43-a9fb-4e4d-b46b-991d2d51c892", 00:08:35.380 "is_configured": false, 00:08:35.380 "data_offset": 0, 00:08:35.380 "data_size": 65536 00:08:35.380 }, 00:08:35.380 { 00:08:35.380 "name": "BaseBdev2", 00:08:35.380 "uuid": "63f64cf7-9a28-4bc9-9b05-c1d7bb981dc4", 00:08:35.380 "is_configured": true, 00:08:35.380 "data_offset": 
0, 00:08:35.380 "data_size": 65536 00:08:35.380 }, 00:08:35.380 { 00:08:35.380 "name": "BaseBdev3", 00:08:35.380 "uuid": "52bde137-734a-4960-ad80-100561b6988d", 00:08:35.380 "is_configured": true, 00:08:35.380 "data_offset": 0, 00:08:35.380 "data_size": 65536 00:08:35.380 } 00:08:35.380 ] 00:08:35.380 }' 00:08:35.380 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.380 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.000 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.000 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:36.000 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.000 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.000 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.000 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:36.000 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.000 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.000 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:36.000 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.000 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.000 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8745ca43-a9fb-4e4d-b46b-991d2d51c892 00:08:36.000 12:51:53 bdev_raid.raid_state_function_test -- 
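The `jq -r '.[] | select(.name == "Existed_Raid")'` filter in the trace above picks one raid bdev's record out of the array that `rpc_cmd bdev_raid_get_bdevs all` returns. A minimal Python equivalent, using a trimmed-down copy of the JSON captured in this log (field values come from the trace, not from a live SPDK target):

```python
import json

# Trimmed copy of the bdev_raid_get_bdevs output captured in the trace.
bdevs = json.loads("""[
  {"name": "Existed_Raid", "state": "configuring", "raid_level": "concat",
   "num_base_bdevs": 3, "num_base_bdevs_discovered": 2,
   "num_base_bdevs_operational": 3}
]""")

# Equivalent of: jq -r '.[] | select(.name == "Existed_Raid")'
info = next(b for b in bdevs if b["name"] == "Existed_Raid")
print(info["state"], info["num_base_bdevs_discovered"])  # configuring 2
```

The test then asserts on fields of the selected record, e.g. that `num_base_bdevs_discovered` is one short of `num_base_bdevs_operational` while the null base bdev is still missing.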
common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.000 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.000 [2024-11-26 12:51:53.542193] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:36.000 [2024-11-26 12:51:53.542301] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:36.000 [2024-11-26 12:51:53.542329] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:36.000 [2024-11-26 12:51:53.542605] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:36.001 [2024-11-26 12:51:53.542758] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:36.001 [2024-11-26 12:51:53.542798] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:36.001 [2024-11-26 12:51:53.543004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.001 NewBaseBdev 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:36.001 
12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.001 [ 00:08:36.001 { 00:08:36.001 "name": "NewBaseBdev", 00:08:36.001 "aliases": [ 00:08:36.001 "8745ca43-a9fb-4e4d-b46b-991d2d51c892" 00:08:36.001 ], 00:08:36.001 "product_name": "Malloc disk", 00:08:36.001 "block_size": 512, 00:08:36.001 "num_blocks": 65536, 00:08:36.001 "uuid": "8745ca43-a9fb-4e4d-b46b-991d2d51c892", 00:08:36.001 "assigned_rate_limits": { 00:08:36.001 "rw_ios_per_sec": 0, 00:08:36.001 "rw_mbytes_per_sec": 0, 00:08:36.001 "r_mbytes_per_sec": 0, 00:08:36.001 "w_mbytes_per_sec": 0 00:08:36.001 }, 00:08:36.001 "claimed": true, 00:08:36.001 "claim_type": "exclusive_write", 00:08:36.001 "zoned": false, 00:08:36.001 "supported_io_types": { 00:08:36.001 "read": true, 00:08:36.001 "write": true, 00:08:36.001 "unmap": true, 00:08:36.001 "flush": true, 00:08:36.001 "reset": true, 00:08:36.001 "nvme_admin": false, 00:08:36.001 "nvme_io": false, 00:08:36.001 "nvme_io_md": false, 00:08:36.001 "write_zeroes": true, 00:08:36.001 "zcopy": true, 00:08:36.001 "get_zone_info": false, 00:08:36.001 "zone_management": false, 00:08:36.001 "zone_append": false, 00:08:36.001 "compare": false, 00:08:36.001 "compare_and_write": false, 00:08:36.001 "abort": true, 00:08:36.001 "seek_hole": false, 00:08:36.001 "seek_data": false, 00:08:36.001 "copy": true, 00:08:36.001 "nvme_iov_md": false 00:08:36.001 }, 00:08:36.001 
"memory_domains": [ 00:08:36.001 { 00:08:36.001 "dma_device_id": "system", 00:08:36.001 "dma_device_type": 1 00:08:36.001 }, 00:08:36.001 { 00:08:36.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.001 "dma_device_type": 2 00:08:36.001 } 00:08:36.001 ], 00:08:36.001 "driver_specific": {} 00:08:36.001 } 00:08:36.001 ] 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.001 "name": "Existed_Raid", 00:08:36.001 "uuid": "8c21734d-f6bf-4ed2-b50a-847b37108340", 00:08:36.001 "strip_size_kb": 64, 00:08:36.001 "state": "online", 00:08:36.001 "raid_level": "concat", 00:08:36.001 "superblock": false, 00:08:36.001 "num_base_bdevs": 3, 00:08:36.001 "num_base_bdevs_discovered": 3, 00:08:36.001 "num_base_bdevs_operational": 3, 00:08:36.001 "base_bdevs_list": [ 00:08:36.001 { 00:08:36.001 "name": "NewBaseBdev", 00:08:36.001 "uuid": "8745ca43-a9fb-4e4d-b46b-991d2d51c892", 00:08:36.001 "is_configured": true, 00:08:36.001 "data_offset": 0, 00:08:36.001 "data_size": 65536 00:08:36.001 }, 00:08:36.001 { 00:08:36.001 "name": "BaseBdev2", 00:08:36.001 "uuid": "63f64cf7-9a28-4bc9-9b05-c1d7bb981dc4", 00:08:36.001 "is_configured": true, 00:08:36.001 "data_offset": 0, 00:08:36.001 "data_size": 65536 00:08:36.001 }, 00:08:36.001 { 00:08:36.001 "name": "BaseBdev3", 00:08:36.001 "uuid": "52bde137-734a-4960-ad80-100561b6988d", 00:08:36.001 "is_configured": true, 00:08:36.001 "data_offset": 0, 00:08:36.001 "data_size": 65536 00:08:36.001 } 00:08:36.001 ] 00:08:36.001 }' 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.001 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.566 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:36.566 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:36.566 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:08:36.566 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:36.566 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:36.566 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:36.566 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:36.566 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:36.566 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.566 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.566 [2024-11-26 12:51:54.069551] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:36.566 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.566 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:36.566 "name": "Existed_Raid", 00:08:36.566 "aliases": [ 00:08:36.566 "8c21734d-f6bf-4ed2-b50a-847b37108340" 00:08:36.566 ], 00:08:36.566 "product_name": "Raid Volume", 00:08:36.566 "block_size": 512, 00:08:36.566 "num_blocks": 196608, 00:08:36.566 "uuid": "8c21734d-f6bf-4ed2-b50a-847b37108340", 00:08:36.566 "assigned_rate_limits": { 00:08:36.566 "rw_ios_per_sec": 0, 00:08:36.566 "rw_mbytes_per_sec": 0, 00:08:36.566 "r_mbytes_per_sec": 0, 00:08:36.566 "w_mbytes_per_sec": 0 00:08:36.566 }, 00:08:36.566 "claimed": false, 00:08:36.566 "zoned": false, 00:08:36.566 "supported_io_types": { 00:08:36.566 "read": true, 00:08:36.566 "write": true, 00:08:36.566 "unmap": true, 00:08:36.566 "flush": true, 00:08:36.566 "reset": true, 00:08:36.566 "nvme_admin": false, 00:08:36.566 "nvme_io": false, 00:08:36.566 "nvme_io_md": false, 00:08:36.566 "write_zeroes": true, 
00:08:36.566 "zcopy": false, 00:08:36.566 "get_zone_info": false, 00:08:36.566 "zone_management": false, 00:08:36.566 "zone_append": false, 00:08:36.566 "compare": false, 00:08:36.566 "compare_and_write": false, 00:08:36.566 "abort": false, 00:08:36.566 "seek_hole": false, 00:08:36.566 "seek_data": false, 00:08:36.566 "copy": false, 00:08:36.566 "nvme_iov_md": false 00:08:36.566 }, 00:08:36.566 "memory_domains": [ 00:08:36.566 { 00:08:36.566 "dma_device_id": "system", 00:08:36.566 "dma_device_type": 1 00:08:36.566 }, 00:08:36.566 { 00:08:36.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.566 "dma_device_type": 2 00:08:36.566 }, 00:08:36.566 { 00:08:36.566 "dma_device_id": "system", 00:08:36.566 "dma_device_type": 1 00:08:36.566 }, 00:08:36.566 { 00:08:36.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.566 "dma_device_type": 2 00:08:36.566 }, 00:08:36.566 { 00:08:36.566 "dma_device_id": "system", 00:08:36.566 "dma_device_type": 1 00:08:36.566 }, 00:08:36.566 { 00:08:36.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.566 "dma_device_type": 2 00:08:36.566 } 00:08:36.566 ], 00:08:36.566 "driver_specific": { 00:08:36.566 "raid": { 00:08:36.566 "uuid": "8c21734d-f6bf-4ed2-b50a-847b37108340", 00:08:36.566 "strip_size_kb": 64, 00:08:36.566 "state": "online", 00:08:36.566 "raid_level": "concat", 00:08:36.566 "superblock": false, 00:08:36.566 "num_base_bdevs": 3, 00:08:36.566 "num_base_bdevs_discovered": 3, 00:08:36.566 "num_base_bdevs_operational": 3, 00:08:36.566 "base_bdevs_list": [ 00:08:36.566 { 00:08:36.566 "name": "NewBaseBdev", 00:08:36.566 "uuid": "8745ca43-a9fb-4e4d-b46b-991d2d51c892", 00:08:36.566 "is_configured": true, 00:08:36.566 "data_offset": 0, 00:08:36.566 "data_size": 65536 00:08:36.566 }, 00:08:36.566 { 00:08:36.566 "name": "BaseBdev2", 00:08:36.566 "uuid": "63f64cf7-9a28-4bc9-9b05-c1d7bb981dc4", 00:08:36.566 "is_configured": true, 00:08:36.566 "data_offset": 0, 00:08:36.566 "data_size": 65536 00:08:36.566 }, 00:08:36.566 { 
00:08:36.566 "name": "BaseBdev3", 00:08:36.566 "uuid": "52bde137-734a-4960-ad80-100561b6988d", 00:08:36.566 "is_configured": true, 00:08:36.566 "data_offset": 0, 00:08:36.566 "data_size": 65536 00:08:36.566 } 00:08:36.566 ] 00:08:36.566 } 00:08:36.566 } 00:08:36.566 }' 00:08:36.566 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:36.566 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:36.566 BaseBdev2 00:08:36.566 BaseBdev3' 00:08:36.566 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.566 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:36.566 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.566 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:36.566 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.566 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.566 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.566 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.567 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.567 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.567 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.567 12:51:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:36.567 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.567 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.567 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.567 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.824 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.824 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.824 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.824 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:36.824 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.824 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.824 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.824 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.824 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.824 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.824 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:36.824 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.824 12:51:54 bdev_raid.raid_state_function_test 
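The `[[ 512 == \5\1\2\ \ \ ]]` comparisons above check that each base bdev matches the raid volume on `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")`. In jq, `join` renders `null` elements as empty strings, which is why the compared value is `512` followed by three spaces. A small Python mirror of that filter (the record is a trimmed sketch of the `bdev_get_bdevs` output, with the metadata fields absent/null as in this run):

```python
# One base bdev record, reduced to the fields the comparison looks at;
# md_size/md_interleave/dif_type are null for these plain malloc bdevs.
bdev = {"block_size": 512, "md_size": None, "md_interleave": None, "dif_type": None}

# Equivalent of: jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# jq's join() turns null elements into empty strings.
fields = [bdev["block_size"], bdev["md_size"], bdev["md_interleave"], bdev["dif_type"]]
line = " ".join("" if f is None else str(f) for f in fields)
print(repr(line))  # '512   '
```

Comparing this joined string for the raid bdev and each base bdev catches any mismatch in block size or metadata/DIF layout in a single equality test.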
-- common/autotest_common.sh@10 -- # set +x 00:08:36.824 [2024-11-26 12:51:54.328841] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:36.824 [2024-11-26 12:51:54.328907] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:36.824 [2024-11-26 12:51:54.329004] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:36.824 [2024-11-26 12:51:54.329069] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:36.824 [2024-11-26 12:51:54.329139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:36.824 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.824 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 76967 00:08:36.824 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 76967 ']' 00:08:36.824 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 76967 00:08:36.824 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:36.824 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:36.824 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76967 00:08:36.824 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:36.824 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:36.824 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76967' 00:08:36.824 killing process with pid 76967 00:08:36.824 12:51:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@969 -- # kill 76967 00:08:36.824 [2024-11-26 12:51:54.368556] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:36.824 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 76967 00:08:36.824 [2024-11-26 12:51:54.398793] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:37.082 00:08:37.082 real 0m8.583s 00:08:37.082 user 0m14.591s 00:08:37.082 sys 0m1.738s 00:08:37.082 ************************************ 00:08:37.082 END TEST raid_state_function_test 00:08:37.082 ************************************ 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.082 12:51:54 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:37.082 12:51:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:37.082 12:51:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.082 12:51:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:37.082 ************************************ 00:08:37.082 START TEST raid_state_function_test_sb 00:08:37.082 ************************************ 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- 
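The `killprocess 76967` sequence above probes the app with `kill -0`, recovers its command name via `ps --no-headers -o comm=`, and only then signals and waits. A self-contained Python sketch of that liveness-probe pattern (it inspects this interpreter's own PID rather than an SPDK reactor, so nothing is actually killed):

```python
import os
import subprocess

# killprocess pattern from the trace: signal 0 probes liveness without
# delivering a signal; ProcessLookupError would mean the PID is gone.
pid = os.getpid()
os.kill(pid, 0)

# Recover the command name the same way the harness does.
name = subprocess.check_output(
    ["ps", "--no-headers", "-o", "comm=", str(pid)], text=True).strip()
print(f"killing process with pid {pid} ({name})")
```

The harness additionally refuses to kill when the resolved name is `sudo`, so a wrapper process is never signalled in place of the test app.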
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77572 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77572' 00:08:37.082 Process raid pid: 77572 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77572 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 77572 ']' 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:37.082 12:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.341 [2024-11-26 12:51:54.810225] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:37.341 [2024-11-26 12:51:54.810437] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.342 [2024-11-26 12:51:54.968270] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.342 [2024-11-26 12:51:55.012560] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.600 [2024-11-26 12:51:55.054277] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.600 [2024-11-26 12:51:55.054393] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.167 12:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:38.167 12:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:38.167 12:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:38.167 12:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.167 12:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.167 [2024-11-26 12:51:55.635358] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:38.167 [2024-11-26 12:51:55.635406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:38.167 [2024-11-26 
12:51:55.635426] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:38.167 [2024-11-26 12:51:55.635436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:38.167 [2024-11-26 12:51:55.635442] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:38.168 [2024-11-26 12:51:55.635454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:38.168 12:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.168 12:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:38.168 12:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.168 12:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.168 12:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:38.168 12:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.168 12:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.168 12:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.168 12:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.168 12:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.168 12:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.168 12:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.168 12:51:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.168 12:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.168 12:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.168 12:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.168 12:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.168 "name": "Existed_Raid", 00:08:38.168 "uuid": "d5de3859-550f-4224-8089-ea64164b5f6d", 00:08:38.168 "strip_size_kb": 64, 00:08:38.168 "state": "configuring", 00:08:38.168 "raid_level": "concat", 00:08:38.168 "superblock": true, 00:08:38.168 "num_base_bdevs": 3, 00:08:38.168 "num_base_bdevs_discovered": 0, 00:08:38.168 "num_base_bdevs_operational": 3, 00:08:38.168 "base_bdevs_list": [ 00:08:38.168 { 00:08:38.168 "name": "BaseBdev1", 00:08:38.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.168 "is_configured": false, 00:08:38.168 "data_offset": 0, 00:08:38.168 "data_size": 0 00:08:38.168 }, 00:08:38.168 { 00:08:38.168 "name": "BaseBdev2", 00:08:38.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.168 "is_configured": false, 00:08:38.168 "data_offset": 0, 00:08:38.168 "data_size": 0 00:08:38.168 }, 00:08:38.168 { 00:08:38.168 "name": "BaseBdev3", 00:08:38.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.168 "is_configured": false, 00:08:38.168 "data_offset": 0, 00:08:38.168 "data_size": 0 00:08:38.168 } 00:08:38.168 ] 00:08:38.168 }' 00:08:38.168 12:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.168 12:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.427 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:38.427 12:51:56 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.427 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.427 [2024-11-26 12:51:56.074617] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:38.427 [2024-11-26 12:51:56.074701] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:38.427 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.427 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:38.427 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.427 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.427 [2024-11-26 12:51:56.086643] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:38.427 [2024-11-26 12:51:56.086737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:38.427 [2024-11-26 12:51:56.086764] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:38.427 [2024-11-26 12:51:56.086785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:38.427 [2024-11-26 12:51:56.086802] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:38.427 [2024-11-26 12:51:56.086822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:38.427 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.427 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:38.427 
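The `bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid` call above is what moves the raid bdev through the `configuring` states being verified. A sketch of how that argv would be assembled for SPDK's RPC client (the `./scripts/rpc.py` path is an assumption from a typical SPDK checkout; no target is contacted here):

```python
# Assemble the rpc.py invocation seen in the trace. Hypothetical script
# path; a running spdk_tgt would be required to actually execute it.
base_bdevs = ["BaseBdev1", "BaseBdev2", "BaseBdev3"]
argv = ["./scripts/rpc.py", "bdev_raid_create",
        "-z", "64",                  # strip size in KiB
        "-s",                        # write a superblock on each base bdev
        "-r", "concat",              # raid level under test
        "-b", " ".join(base_bdevs),  # space-separated base bdev names
        "-n", "Existed_Raid"]        # raid bdev name
print(" ".join(argv))
```

Because none of the three base bdevs exist yet, the create call leaves the raid bdev in `configuring` state until each `bdev_malloc_create` below supplies a base bdev and it is claimed.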
12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.427 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.686 [2024-11-26 12:51:56.107709] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:38.686 BaseBdev1 00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.686 [ 00:08:38.686 { 
00:08:38.686 "name": "BaseBdev1", 00:08:38.686 "aliases": [ 00:08:38.686 "8c58c3dc-a5d9-45e1-9d67-454daa5fe53d" 00:08:38.686 ], 00:08:38.686 "product_name": "Malloc disk", 00:08:38.686 "block_size": 512, 00:08:38.686 "num_blocks": 65536, 00:08:38.686 "uuid": "8c58c3dc-a5d9-45e1-9d67-454daa5fe53d", 00:08:38.686 "assigned_rate_limits": { 00:08:38.686 "rw_ios_per_sec": 0, 00:08:38.686 "rw_mbytes_per_sec": 0, 00:08:38.686 "r_mbytes_per_sec": 0, 00:08:38.686 "w_mbytes_per_sec": 0 00:08:38.686 }, 00:08:38.686 "claimed": true, 00:08:38.686 "claim_type": "exclusive_write", 00:08:38.686 "zoned": false, 00:08:38.686 "supported_io_types": { 00:08:38.686 "read": true, 00:08:38.686 "write": true, 00:08:38.686 "unmap": true, 00:08:38.686 "flush": true, 00:08:38.686 "reset": true, 00:08:38.686 "nvme_admin": false, 00:08:38.686 "nvme_io": false, 00:08:38.686 "nvme_io_md": false, 00:08:38.686 "write_zeroes": true, 00:08:38.686 "zcopy": true, 00:08:38.686 "get_zone_info": false, 00:08:38.686 "zone_management": false, 00:08:38.686 "zone_append": false, 00:08:38.686 "compare": false, 00:08:38.686 "compare_and_write": false, 00:08:38.686 "abort": true, 00:08:38.686 "seek_hole": false, 00:08:38.686 "seek_data": false, 00:08:38.686 "copy": true, 00:08:38.686 "nvme_iov_md": false 00:08:38.686 }, 00:08:38.686 "memory_domains": [ 00:08:38.686 { 00:08:38.686 "dma_device_id": "system", 00:08:38.686 "dma_device_type": 1 00:08:38.686 }, 00:08:38.686 { 00:08:38.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.686 "dma_device_type": 2 00:08:38.686 } 00:08:38.686 ], 00:08:38.686 "driver_specific": {} 00:08:38.686 } 00:08:38.686 ] 00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.686 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.687 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.687 "name": "Existed_Raid", 00:08:38.687 "uuid": "73259fc7-a37f-452c-9de8-5b00167f44ad", 00:08:38.687 "strip_size_kb": 64, 00:08:38.687 "state": "configuring", 00:08:38.687 "raid_level": "concat", 00:08:38.687 "superblock": true, 00:08:38.687 
"num_base_bdevs": 3, 00:08:38.687 "num_base_bdevs_discovered": 1, 00:08:38.687 "num_base_bdevs_operational": 3, 00:08:38.687 "base_bdevs_list": [ 00:08:38.687 { 00:08:38.687 "name": "BaseBdev1", 00:08:38.687 "uuid": "8c58c3dc-a5d9-45e1-9d67-454daa5fe53d", 00:08:38.687 "is_configured": true, 00:08:38.687 "data_offset": 2048, 00:08:38.687 "data_size": 63488 00:08:38.687 }, 00:08:38.687 { 00:08:38.687 "name": "BaseBdev2", 00:08:38.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.687 "is_configured": false, 00:08:38.687 "data_offset": 0, 00:08:38.687 "data_size": 0 00:08:38.687 }, 00:08:38.687 { 00:08:38.687 "name": "BaseBdev3", 00:08:38.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.687 "is_configured": false, 00:08:38.687 "data_offset": 0, 00:08:38.687 "data_size": 0 00:08:38.687 } 00:08:38.687 ] 00:08:38.687 }' 00:08:38.687 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.687 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.946 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:38.946 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.946 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.946 [2024-11-26 12:51:56.551091] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:38.946 [2024-11-26 12:51:56.551198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:38.946 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.946 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:38.946 
12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.946 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.946 [2024-11-26 12:51:56.563127] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:38.946 [2024-11-26 12:51:56.565101] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:38.946 [2024-11-26 12:51:56.565144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:38.946 [2024-11-26 12:51:56.565154] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:38.946 [2024-11-26 12:51:56.565164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:38.946 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.946 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:38.946 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:38.946 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:38.946 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.946 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.946 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:38.946 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.947 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.947 12:51:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.947 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.947 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.947 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.947 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.947 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.947 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.947 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.947 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.947 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.947 "name": "Existed_Raid", 00:08:38.947 "uuid": "02b507ce-6125-4b09-bb25-80084eb04991", 00:08:38.947 "strip_size_kb": 64, 00:08:38.947 "state": "configuring", 00:08:38.947 "raid_level": "concat", 00:08:38.947 "superblock": true, 00:08:38.947 "num_base_bdevs": 3, 00:08:38.947 "num_base_bdevs_discovered": 1, 00:08:38.947 "num_base_bdevs_operational": 3, 00:08:38.947 "base_bdevs_list": [ 00:08:38.947 { 00:08:38.947 "name": "BaseBdev1", 00:08:38.947 "uuid": "8c58c3dc-a5d9-45e1-9d67-454daa5fe53d", 00:08:38.947 "is_configured": true, 00:08:38.947 "data_offset": 2048, 00:08:38.947 "data_size": 63488 00:08:38.947 }, 00:08:38.947 { 00:08:38.947 "name": "BaseBdev2", 00:08:38.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.947 "is_configured": false, 00:08:38.947 "data_offset": 0, 00:08:38.947 "data_size": 0 00:08:38.947 }, 00:08:38.947 { 00:08:38.947 "name": "BaseBdev3", 00:08:38.947 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:38.947 "is_configured": false, 00:08:38.947 "data_offset": 0, 00:08:38.947 "data_size": 0 00:08:38.947 } 00:08:38.947 ] 00:08:38.947 }' 00:08:38.947 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.947 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.516 12:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:39.516 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.516 12:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.516 [2024-11-26 12:51:57.023552] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:39.516 BaseBdev2 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.516 [ 00:08:39.516 { 00:08:39.516 "name": "BaseBdev2", 00:08:39.516 "aliases": [ 00:08:39.516 "2213355b-23f6-4ec8-b724-3e2ac40cacd0" 00:08:39.516 ], 00:08:39.516 "product_name": "Malloc disk", 00:08:39.516 "block_size": 512, 00:08:39.516 "num_blocks": 65536, 00:08:39.516 "uuid": "2213355b-23f6-4ec8-b724-3e2ac40cacd0", 00:08:39.516 "assigned_rate_limits": { 00:08:39.516 "rw_ios_per_sec": 0, 00:08:39.516 "rw_mbytes_per_sec": 0, 00:08:39.516 "r_mbytes_per_sec": 0, 00:08:39.516 "w_mbytes_per_sec": 0 00:08:39.516 }, 00:08:39.516 "claimed": true, 00:08:39.516 "claim_type": "exclusive_write", 00:08:39.516 "zoned": false, 00:08:39.516 "supported_io_types": { 00:08:39.516 "read": true, 00:08:39.516 "write": true, 00:08:39.516 "unmap": true, 00:08:39.516 "flush": true, 00:08:39.516 "reset": true, 00:08:39.516 "nvme_admin": false, 00:08:39.516 "nvme_io": false, 00:08:39.516 "nvme_io_md": false, 00:08:39.516 "write_zeroes": true, 00:08:39.516 "zcopy": true, 00:08:39.516 "get_zone_info": false, 00:08:39.516 "zone_management": false, 00:08:39.516 "zone_append": false, 00:08:39.516 "compare": false, 00:08:39.516 "compare_and_write": false, 00:08:39.516 "abort": true, 00:08:39.516 "seek_hole": false, 00:08:39.516 "seek_data": false, 00:08:39.516 "copy": true, 00:08:39.516 "nvme_iov_md": false 00:08:39.516 }, 00:08:39.516 "memory_domains": [ 00:08:39.516 { 00:08:39.516 "dma_device_id": "system", 00:08:39.516 "dma_device_type": 1 00:08:39.516 }, 00:08:39.516 { 00:08:39.516 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.516 "dma_device_type": 2 00:08:39.516 } 00:08:39.516 ], 00:08:39.516 "driver_specific": {} 00:08:39.516 } 00:08:39.516 ] 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.516 "name": "Existed_Raid", 00:08:39.516 "uuid": "02b507ce-6125-4b09-bb25-80084eb04991", 00:08:39.516 "strip_size_kb": 64, 00:08:39.516 "state": "configuring", 00:08:39.516 "raid_level": "concat", 00:08:39.516 "superblock": true, 00:08:39.516 "num_base_bdevs": 3, 00:08:39.516 "num_base_bdevs_discovered": 2, 00:08:39.516 "num_base_bdevs_operational": 3, 00:08:39.516 "base_bdevs_list": [ 00:08:39.516 { 00:08:39.516 "name": "BaseBdev1", 00:08:39.516 "uuid": "8c58c3dc-a5d9-45e1-9d67-454daa5fe53d", 00:08:39.516 "is_configured": true, 00:08:39.516 "data_offset": 2048, 00:08:39.516 "data_size": 63488 00:08:39.516 }, 00:08:39.516 { 00:08:39.516 "name": "BaseBdev2", 00:08:39.516 "uuid": "2213355b-23f6-4ec8-b724-3e2ac40cacd0", 00:08:39.516 "is_configured": true, 00:08:39.516 "data_offset": 2048, 00:08:39.516 "data_size": 63488 00:08:39.516 }, 00:08:39.516 { 00:08:39.516 "name": "BaseBdev3", 00:08:39.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.516 "is_configured": false, 00:08:39.516 "data_offset": 0, 00:08:39.516 "data_size": 0 00:08:39.516 } 00:08:39.516 ] 00:08:39.516 }' 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.516 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.085 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:40.085 12:51:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.085 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.085 BaseBdev3 00:08:40.085 [2024-11-26 12:51:57.505638] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:40.085 [2024-11-26 12:51:57.505830] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:40.085 [2024-11-26 12:51:57.505861] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:40.085 [2024-11-26 12:51:57.506150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:40.085 [2024-11-26 12:51:57.506287] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:40.085 [2024-11-26 12:51:57.506297] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:40.085 [2024-11-26 12:51:57.506437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.085 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.085 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:40.085 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:40.085 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:40.085 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:40.085 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:40.085 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:40.085 12:51:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:40.085 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.085 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.085 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.085 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:40.085 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.085 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.085 [ 00:08:40.085 { 00:08:40.085 "name": "BaseBdev3", 00:08:40.085 "aliases": [ 00:08:40.085 "f5434aa1-1719-4715-a342-16ec15104e4f" 00:08:40.085 ], 00:08:40.085 "product_name": "Malloc disk", 00:08:40.085 "block_size": 512, 00:08:40.085 "num_blocks": 65536, 00:08:40.085 "uuid": "f5434aa1-1719-4715-a342-16ec15104e4f", 00:08:40.085 "assigned_rate_limits": { 00:08:40.085 "rw_ios_per_sec": 0, 00:08:40.085 "rw_mbytes_per_sec": 0, 00:08:40.085 "r_mbytes_per_sec": 0, 00:08:40.085 "w_mbytes_per_sec": 0 00:08:40.085 }, 00:08:40.085 "claimed": true, 00:08:40.085 "claim_type": "exclusive_write", 00:08:40.085 "zoned": false, 00:08:40.085 "supported_io_types": { 00:08:40.085 "read": true, 00:08:40.085 "write": true, 00:08:40.085 "unmap": true, 00:08:40.085 "flush": true, 00:08:40.085 "reset": true, 00:08:40.085 "nvme_admin": false, 00:08:40.085 "nvme_io": false, 00:08:40.085 "nvme_io_md": false, 00:08:40.085 "write_zeroes": true, 00:08:40.085 "zcopy": true, 00:08:40.085 "get_zone_info": false, 00:08:40.085 "zone_management": false, 00:08:40.085 "zone_append": false, 00:08:40.085 "compare": false, 00:08:40.085 "compare_and_write": false, 00:08:40.085 "abort": true, 00:08:40.085 "seek_hole": false, 00:08:40.085 "seek_data": false, 
00:08:40.085 "copy": true, 00:08:40.085 "nvme_iov_md": false 00:08:40.085 }, 00:08:40.085 "memory_domains": [ 00:08:40.085 { 00:08:40.085 "dma_device_id": "system", 00:08:40.085 "dma_device_type": 1 00:08:40.086 }, 00:08:40.086 { 00:08:40.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.086 "dma_device_type": 2 00:08:40.086 } 00:08:40.086 ], 00:08:40.086 "driver_specific": {} 00:08:40.086 } 00:08:40.086 ] 00:08:40.086 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.086 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:40.086 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:40.086 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:40.086 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:40.086 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.086 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:40.086 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:40.086 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.086 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.086 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.086 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.086 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.086 12:51:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.086 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.086 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.086 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.086 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.086 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.086 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.086 "name": "Existed_Raid", 00:08:40.086 "uuid": "02b507ce-6125-4b09-bb25-80084eb04991", 00:08:40.086 "strip_size_kb": 64, 00:08:40.086 "state": "online", 00:08:40.086 "raid_level": "concat", 00:08:40.086 "superblock": true, 00:08:40.086 "num_base_bdevs": 3, 00:08:40.086 "num_base_bdevs_discovered": 3, 00:08:40.086 "num_base_bdevs_operational": 3, 00:08:40.086 "base_bdevs_list": [ 00:08:40.086 { 00:08:40.086 "name": "BaseBdev1", 00:08:40.086 "uuid": "8c58c3dc-a5d9-45e1-9d67-454daa5fe53d", 00:08:40.086 "is_configured": true, 00:08:40.086 "data_offset": 2048, 00:08:40.086 "data_size": 63488 00:08:40.086 }, 00:08:40.086 { 00:08:40.086 "name": "BaseBdev2", 00:08:40.086 "uuid": "2213355b-23f6-4ec8-b724-3e2ac40cacd0", 00:08:40.086 "is_configured": true, 00:08:40.086 "data_offset": 2048, 00:08:40.086 "data_size": 63488 00:08:40.086 }, 00:08:40.086 { 00:08:40.086 "name": "BaseBdev3", 00:08:40.086 "uuid": "f5434aa1-1719-4715-a342-16ec15104e4f", 00:08:40.086 "is_configured": true, 00:08:40.086 "data_offset": 2048, 00:08:40.086 "data_size": 63488 00:08:40.086 } 00:08:40.086 ] 00:08:40.086 }' 00:08:40.086 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.086 12:51:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.345 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:40.345 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:40.345 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:40.345 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:40.345 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:40.345 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:40.345 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:40.345 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.345 12:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:40.345 12:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.345 [2024-11-26 12:51:57.993090] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:40.345 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:40.605 "name": "Existed_Raid", 00:08:40.605 "aliases": [ 00:08:40.605 "02b507ce-6125-4b09-bb25-80084eb04991" 00:08:40.605 ], 00:08:40.605 "product_name": "Raid Volume", 00:08:40.605 "block_size": 512, 00:08:40.605 "num_blocks": 190464, 00:08:40.605 "uuid": "02b507ce-6125-4b09-bb25-80084eb04991", 00:08:40.605 "assigned_rate_limits": { 00:08:40.605 "rw_ios_per_sec": 0, 00:08:40.605 "rw_mbytes_per_sec": 0, 00:08:40.605 
"r_mbytes_per_sec": 0, 00:08:40.605 "w_mbytes_per_sec": 0 00:08:40.605 }, 00:08:40.605 "claimed": false, 00:08:40.605 "zoned": false, 00:08:40.605 "supported_io_types": { 00:08:40.605 "read": true, 00:08:40.605 "write": true, 00:08:40.605 "unmap": true, 00:08:40.605 "flush": true, 00:08:40.605 "reset": true, 00:08:40.605 "nvme_admin": false, 00:08:40.605 "nvme_io": false, 00:08:40.605 "nvme_io_md": false, 00:08:40.605 "write_zeroes": true, 00:08:40.605 "zcopy": false, 00:08:40.605 "get_zone_info": false, 00:08:40.605 "zone_management": false, 00:08:40.605 "zone_append": false, 00:08:40.605 "compare": false, 00:08:40.605 "compare_and_write": false, 00:08:40.605 "abort": false, 00:08:40.605 "seek_hole": false, 00:08:40.605 "seek_data": false, 00:08:40.605 "copy": false, 00:08:40.605 "nvme_iov_md": false 00:08:40.605 }, 00:08:40.605 "memory_domains": [ 00:08:40.605 { 00:08:40.605 "dma_device_id": "system", 00:08:40.605 "dma_device_type": 1 00:08:40.605 }, 00:08:40.605 { 00:08:40.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.605 "dma_device_type": 2 00:08:40.605 }, 00:08:40.605 { 00:08:40.605 "dma_device_id": "system", 00:08:40.605 "dma_device_type": 1 00:08:40.605 }, 00:08:40.605 { 00:08:40.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.605 "dma_device_type": 2 00:08:40.605 }, 00:08:40.605 { 00:08:40.605 "dma_device_id": "system", 00:08:40.605 "dma_device_type": 1 00:08:40.605 }, 00:08:40.605 { 00:08:40.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.605 "dma_device_type": 2 00:08:40.605 } 00:08:40.605 ], 00:08:40.605 "driver_specific": { 00:08:40.605 "raid": { 00:08:40.605 "uuid": "02b507ce-6125-4b09-bb25-80084eb04991", 00:08:40.605 "strip_size_kb": 64, 00:08:40.605 "state": "online", 00:08:40.605 "raid_level": "concat", 00:08:40.605 "superblock": true, 00:08:40.605 "num_base_bdevs": 3, 00:08:40.605 "num_base_bdevs_discovered": 3, 00:08:40.605 "num_base_bdevs_operational": 3, 00:08:40.605 "base_bdevs_list": [ 00:08:40.605 { 00:08:40.605 
"name": "BaseBdev1", 00:08:40.605 "uuid": "8c58c3dc-a5d9-45e1-9d67-454daa5fe53d", 00:08:40.605 "is_configured": true, 00:08:40.605 "data_offset": 2048, 00:08:40.605 "data_size": 63488 00:08:40.605 }, 00:08:40.605 { 00:08:40.605 "name": "BaseBdev2", 00:08:40.605 "uuid": "2213355b-23f6-4ec8-b724-3e2ac40cacd0", 00:08:40.605 "is_configured": true, 00:08:40.605 "data_offset": 2048, 00:08:40.605 "data_size": 63488 00:08:40.605 }, 00:08:40.605 { 00:08:40.605 "name": "BaseBdev3", 00:08:40.605 "uuid": "f5434aa1-1719-4715-a342-16ec15104e4f", 00:08:40.605 "is_configured": true, 00:08:40.605 "data_offset": 2048, 00:08:40.605 "data_size": 63488 00:08:40.605 } 00:08:40.605 ] 00:08:40.605 } 00:08:40.605 } 00:08:40.605 }' 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:40.605 BaseBdev2 00:08:40.605 BaseBdev3' 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.605 12:51:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.605 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.605 [2024-11-26 12:51:58.252442] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:40.606 [2024-11-26 12:51:58.252508] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:40.606 [2024-11-26 12:51:58.252584] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:40.606 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.606 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:40.606 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:40.606 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:40.606 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:40.606 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:40.606 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:40.606 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.606 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:08:40.606 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:40.606 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.606 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:40.606 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.606 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.606 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.606 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.606 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.606 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.606 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.606 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.865 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.865 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.865 "name": "Existed_Raid", 00:08:40.865 "uuid": "02b507ce-6125-4b09-bb25-80084eb04991", 00:08:40.865 "strip_size_kb": 64, 00:08:40.865 "state": "offline", 00:08:40.865 "raid_level": "concat", 00:08:40.865 "superblock": true, 00:08:40.865 "num_base_bdevs": 3, 00:08:40.865 "num_base_bdevs_discovered": 2, 00:08:40.865 "num_base_bdevs_operational": 2, 00:08:40.865 "base_bdevs_list": [ 00:08:40.865 { 00:08:40.865 "name": null, 00:08:40.865 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:40.865 "is_configured": false, 00:08:40.865 "data_offset": 0, 00:08:40.865 "data_size": 63488 00:08:40.865 }, 00:08:40.865 { 00:08:40.865 "name": "BaseBdev2", 00:08:40.865 "uuid": "2213355b-23f6-4ec8-b724-3e2ac40cacd0", 00:08:40.865 "is_configured": true, 00:08:40.865 "data_offset": 2048, 00:08:40.865 "data_size": 63488 00:08:40.865 }, 00:08:40.865 { 00:08:40.865 "name": "BaseBdev3", 00:08:40.865 "uuid": "f5434aa1-1719-4715-a342-16ec15104e4f", 00:08:40.865 "is_configured": true, 00:08:40.865 "data_offset": 2048, 00:08:40.865 "data_size": 63488 00:08:40.865 } 00:08:40.865 ] 00:08:40.865 }' 00:08:40.865 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.865 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.125 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:41.125 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:41.125 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.125 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.125 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.125 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:41.125 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.125 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:41.125 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:41.125 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
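As an aside to the trace above: the jq expression at bdev_raid.sh@188, `.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name`, is what turns the captured JSON into the `base_bdev_names` list. A minimal Python sketch of the same selection, run against a payload abbreviated from the `Existed_Raid` record shown above (this mirrors the jq filter only; it is not part of the test script):

```python
import json

# Abbreviated bdev_raid_get_bdevs-style record, copied from the trace above.
# Only the fields the jq filter touches are kept.
raid_info = json.loads("""
{
  "name": "Existed_Raid",
  "state": "offline",
  "raid_level": "concat",
  "base_bdevs_list": [
    {"name": null,        "is_configured": false},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true}
  ]
}
""")

# Equivalent of: jq -r '.base_bdevs_list[] | select(.is_configured == true).name'
configured = [b["name"] for b in raid_info["base_bdevs_list"] if b["is_configured"]]
print(configured)  # ['BaseBdev2', 'BaseBdev3']
```

This is why, after BaseBdev1 is deleted, only BaseBdev2 and BaseBdev3 survive the filter while the zeroed-out first slot (name `null`, `is_configured: false`) drops out.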
00:08:41.125 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.125 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.125 [2024-11-26 12:51:58.758941] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:41.125 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.125 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:41.125 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:41.125 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.125 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.125 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:41.125 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.125 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.385 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:41.385 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:41.385 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:41.385 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.385 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.385 [2024-11-26 12:51:58.829965] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:41.385 [2024-11-26 12:51:58.830008] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.386 BaseBdev2 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.386 
12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.386 [ 00:08:41.386 { 00:08:41.386 "name": "BaseBdev2", 00:08:41.386 "aliases": [ 00:08:41.386 "752e579b-0417-419c-9bf3-c131d61ddb93" 00:08:41.386 ], 00:08:41.386 "product_name": "Malloc disk", 00:08:41.386 "block_size": 512, 00:08:41.386 "num_blocks": 65536, 00:08:41.386 "uuid": "752e579b-0417-419c-9bf3-c131d61ddb93", 00:08:41.386 "assigned_rate_limits": { 00:08:41.386 "rw_ios_per_sec": 0, 00:08:41.386 "rw_mbytes_per_sec": 0, 00:08:41.386 "r_mbytes_per_sec": 0, 00:08:41.386 "w_mbytes_per_sec": 0 
00:08:41.386 }, 00:08:41.386 "claimed": false, 00:08:41.386 "zoned": false, 00:08:41.386 "supported_io_types": { 00:08:41.386 "read": true, 00:08:41.386 "write": true, 00:08:41.386 "unmap": true, 00:08:41.386 "flush": true, 00:08:41.386 "reset": true, 00:08:41.386 "nvme_admin": false, 00:08:41.386 "nvme_io": false, 00:08:41.386 "nvme_io_md": false, 00:08:41.386 "write_zeroes": true, 00:08:41.386 "zcopy": true, 00:08:41.386 "get_zone_info": false, 00:08:41.386 "zone_management": false, 00:08:41.386 "zone_append": false, 00:08:41.386 "compare": false, 00:08:41.386 "compare_and_write": false, 00:08:41.386 "abort": true, 00:08:41.386 "seek_hole": false, 00:08:41.386 "seek_data": false, 00:08:41.386 "copy": true, 00:08:41.386 "nvme_iov_md": false 00:08:41.386 }, 00:08:41.386 "memory_domains": [ 00:08:41.386 { 00:08:41.386 "dma_device_id": "system", 00:08:41.386 "dma_device_type": 1 00:08:41.386 }, 00:08:41.386 { 00:08:41.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.386 "dma_device_type": 2 00:08:41.386 } 00:08:41.386 ], 00:08:41.386 "driver_specific": {} 00:08:41.386 } 00:08:41.386 ] 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.386 BaseBdev3 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.386 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.386 [ 00:08:41.386 { 00:08:41.386 "name": "BaseBdev3", 00:08:41.386 "aliases": [ 00:08:41.386 "7242f2c7-7e27-4e60-82e0-c8e08538c906" 00:08:41.386 ], 00:08:41.386 "product_name": "Malloc disk", 00:08:41.386 "block_size": 512, 00:08:41.386 "num_blocks": 65536, 00:08:41.386 "uuid": "7242f2c7-7e27-4e60-82e0-c8e08538c906", 00:08:41.386 "assigned_rate_limits": { 00:08:41.386 "rw_ios_per_sec": 0, 00:08:41.386 "rw_mbytes_per_sec": 0, 
00:08:41.386 "r_mbytes_per_sec": 0, 00:08:41.386 "w_mbytes_per_sec": 0 00:08:41.386 }, 00:08:41.386 "claimed": false, 00:08:41.386 "zoned": false, 00:08:41.386 "supported_io_types": { 00:08:41.386 "read": true, 00:08:41.386 "write": true, 00:08:41.386 "unmap": true, 00:08:41.386 "flush": true, 00:08:41.386 "reset": true, 00:08:41.386 "nvme_admin": false, 00:08:41.386 "nvme_io": false, 00:08:41.386 "nvme_io_md": false, 00:08:41.387 "write_zeroes": true, 00:08:41.387 "zcopy": true, 00:08:41.387 "get_zone_info": false, 00:08:41.387 "zone_management": false, 00:08:41.387 "zone_append": false, 00:08:41.387 "compare": false, 00:08:41.387 "compare_and_write": false, 00:08:41.387 "abort": true, 00:08:41.387 "seek_hole": false, 00:08:41.387 "seek_data": false, 00:08:41.387 "copy": true, 00:08:41.387 "nvme_iov_md": false 00:08:41.387 }, 00:08:41.387 "memory_domains": [ 00:08:41.387 { 00:08:41.387 "dma_device_id": "system", 00:08:41.387 "dma_device_type": 1 00:08:41.387 }, 00:08:41.387 { 00:08:41.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.387 "dma_device_type": 2 00:08:41.387 } 00:08:41.387 ], 00:08:41.387 "driver_specific": {} 00:08:41.387 } 00:08:41.387 ] 00:08:41.387 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.387 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:41.387 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:41.387 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:41.387 12:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:41.387 12:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.387 12:51:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:41.387 [2024-11-26 12:51:59.004444] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:41.387 [2024-11-26 12:51:59.004543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:41.387 [2024-11-26 12:51:59.004583] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:41.387 [2024-11-26 12:51:59.006386] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:41.387 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.387 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:41.387 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.387 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.387 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:41.387 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.387 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.387 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.387 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.387 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.387 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.387 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.387 12:51:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.387 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.387 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.387 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.648 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.648 "name": "Existed_Raid", 00:08:41.648 "uuid": "9e24c9e8-32a2-47fa-9a9a-bca3acb26d9f", 00:08:41.648 "strip_size_kb": 64, 00:08:41.648 "state": "configuring", 00:08:41.648 "raid_level": "concat", 00:08:41.648 "superblock": true, 00:08:41.648 "num_base_bdevs": 3, 00:08:41.648 "num_base_bdevs_discovered": 2, 00:08:41.648 "num_base_bdevs_operational": 3, 00:08:41.648 "base_bdevs_list": [ 00:08:41.648 { 00:08:41.648 "name": "BaseBdev1", 00:08:41.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.648 "is_configured": false, 00:08:41.648 "data_offset": 0, 00:08:41.648 "data_size": 0 00:08:41.648 }, 00:08:41.648 { 00:08:41.648 "name": "BaseBdev2", 00:08:41.648 "uuid": "752e579b-0417-419c-9bf3-c131d61ddb93", 00:08:41.648 "is_configured": true, 00:08:41.648 "data_offset": 2048, 00:08:41.648 "data_size": 63488 00:08:41.648 }, 00:08:41.648 { 00:08:41.648 "name": "BaseBdev3", 00:08:41.648 "uuid": "7242f2c7-7e27-4e60-82e0-c8e08538c906", 00:08:41.648 "is_configured": true, 00:08:41.648 "data_offset": 2048, 00:08:41.648 "data_size": 63488 00:08:41.648 } 00:08:41.648 ] 00:08:41.648 }' 00:08:41.648 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.648 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.909 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
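The `verify_raid_bdev_state` helper invoked repeatedly in this trace (bdev_raid.sh@103-115) extracts the raid bdev's record with `bdev_raid_get_bdevs` and compares a handful of fields against expected values. A rough Python sketch of that comparison, with field names taken from the JSON captured above (the real helper is bash and also checks the discovered/operational base-bdev counts per level; this is an illustrative simplification):

```python
import json

# Record shape follows the 'Existed_Raid' JSON captured in the trace above.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "concat",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 3
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, num_operational):
    """Simplified mirror of the checks the bash helper performs on one raid bdev."""
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational)

# e.g. the 'verify_raid_bdev_state Existed_Raid configuring concat 64 3' call above:
ok = verify_raid_bdev_state(raid_bdev_info, "configuring", "concat", 64, 3)
print(ok)  # True
```

Note how the expected state flips between `configuring` (a base bdev slot is unfilled), `online` (all slots claimed), and `offline` (a concat array has no redundancy, so removing any base bdev deconfigures it), matching the state transitions logged in this chunk.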
00:08:41.909 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.909 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.909 [2024-11-26 12:51:59.419670] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:41.909 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.909 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:41.909 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.909 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.909 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:41.909 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.909 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.909 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.909 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.909 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.909 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.909 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.909 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.909 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.909 12:51:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.909 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.909 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.909 "name": "Existed_Raid", 00:08:41.909 "uuid": "9e24c9e8-32a2-47fa-9a9a-bca3acb26d9f", 00:08:41.909 "strip_size_kb": 64, 00:08:41.909 "state": "configuring", 00:08:41.909 "raid_level": "concat", 00:08:41.909 "superblock": true, 00:08:41.909 "num_base_bdevs": 3, 00:08:41.909 "num_base_bdevs_discovered": 1, 00:08:41.909 "num_base_bdevs_operational": 3, 00:08:41.909 "base_bdevs_list": [ 00:08:41.909 { 00:08:41.909 "name": "BaseBdev1", 00:08:41.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.909 "is_configured": false, 00:08:41.909 "data_offset": 0, 00:08:41.909 "data_size": 0 00:08:41.909 }, 00:08:41.909 { 00:08:41.909 "name": null, 00:08:41.909 "uuid": "752e579b-0417-419c-9bf3-c131d61ddb93", 00:08:41.909 "is_configured": false, 00:08:41.909 "data_offset": 0, 00:08:41.909 "data_size": 63488 00:08:41.909 }, 00:08:41.909 { 00:08:41.909 "name": "BaseBdev3", 00:08:41.909 "uuid": "7242f2c7-7e27-4e60-82e0-c8e08538c906", 00:08:41.909 "is_configured": true, 00:08:41.909 "data_offset": 2048, 00:08:41.909 "data_size": 63488 00:08:41.909 } 00:08:41.909 ] 00:08:41.909 }' 00:08:41.909 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.909 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.479 [2024-11-26 12:51:59.913711] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:42.479 BaseBdev1 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.479 [ 00:08:42.479 { 00:08:42.479 "name": "BaseBdev1", 00:08:42.479 "aliases": [ 00:08:42.479 "c46a2667-75db-44df-88d0-c3726d8eac8f" 00:08:42.479 ], 00:08:42.479 "product_name": "Malloc disk", 00:08:42.479 "block_size": 512, 00:08:42.479 "num_blocks": 65536, 00:08:42.479 "uuid": "c46a2667-75db-44df-88d0-c3726d8eac8f", 00:08:42.479 "assigned_rate_limits": { 00:08:42.479 "rw_ios_per_sec": 0, 00:08:42.479 "rw_mbytes_per_sec": 0, 00:08:42.479 "r_mbytes_per_sec": 0, 00:08:42.479 "w_mbytes_per_sec": 0 00:08:42.479 }, 00:08:42.479 "claimed": true, 00:08:42.479 "claim_type": "exclusive_write", 00:08:42.479 "zoned": false, 00:08:42.479 "supported_io_types": { 00:08:42.479 "read": true, 00:08:42.479 "write": true, 00:08:42.479 "unmap": true, 00:08:42.479 "flush": true, 00:08:42.479 "reset": true, 00:08:42.479 "nvme_admin": false, 00:08:42.479 "nvme_io": false, 00:08:42.479 "nvme_io_md": false, 00:08:42.479 "write_zeroes": true, 00:08:42.479 "zcopy": true, 00:08:42.479 "get_zone_info": false, 00:08:42.479 "zone_management": false, 00:08:42.479 "zone_append": false, 00:08:42.479 "compare": false, 00:08:42.479 "compare_and_write": false, 00:08:42.479 "abort": true, 00:08:42.479 "seek_hole": false, 00:08:42.479 "seek_data": false, 00:08:42.479 "copy": true, 00:08:42.479 "nvme_iov_md": false 00:08:42.479 }, 00:08:42.479 "memory_domains": [ 00:08:42.479 { 00:08:42.479 "dma_device_id": "system", 00:08:42.479 "dma_device_type": 1 00:08:42.479 }, 00:08:42.479 { 00:08:42.479 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:42.479 "dma_device_type": 2 00:08:42.479 } 00:08:42.479 ], 00:08:42.479 "driver_specific": {} 00:08:42.479 } 00:08:42.479 ] 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.479 "name": "Existed_Raid", 00:08:42.479 "uuid": "9e24c9e8-32a2-47fa-9a9a-bca3acb26d9f", 00:08:42.479 "strip_size_kb": 64, 00:08:42.479 "state": "configuring", 00:08:42.479 "raid_level": "concat", 00:08:42.479 "superblock": true, 00:08:42.479 "num_base_bdevs": 3, 00:08:42.479 "num_base_bdevs_discovered": 2, 00:08:42.479 "num_base_bdevs_operational": 3, 00:08:42.479 "base_bdevs_list": [ 00:08:42.479 { 00:08:42.479 "name": "BaseBdev1", 00:08:42.479 "uuid": "c46a2667-75db-44df-88d0-c3726d8eac8f", 00:08:42.479 "is_configured": true, 00:08:42.479 "data_offset": 2048, 00:08:42.479 "data_size": 63488 00:08:42.479 }, 00:08:42.479 { 00:08:42.479 "name": null, 00:08:42.479 "uuid": "752e579b-0417-419c-9bf3-c131d61ddb93", 00:08:42.479 "is_configured": false, 00:08:42.479 "data_offset": 0, 00:08:42.479 "data_size": 63488 00:08:42.479 }, 00:08:42.479 { 00:08:42.479 "name": "BaseBdev3", 00:08:42.479 "uuid": "7242f2c7-7e27-4e60-82e0-c8e08538c906", 00:08:42.479 "is_configured": true, 00:08:42.479 "data_offset": 2048, 00:08:42.479 "data_size": 63488 00:08:42.479 } 00:08:42.479 ] 00:08:42.479 }' 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.479 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.738 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:42.738 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.738 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.738 12:52:00 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:08:42.739 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.739 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:42.739 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:42.739 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.739 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.739 [2024-11-26 12:52:00.380932] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:42.739 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.739 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:42.739 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.739 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.739 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:42.739 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.739 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.739 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.739 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.739 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.739 12:52:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.739 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.739 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.739 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.739 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.739 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.998 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.998 "name": "Existed_Raid", 00:08:42.998 "uuid": "9e24c9e8-32a2-47fa-9a9a-bca3acb26d9f", 00:08:42.998 "strip_size_kb": 64, 00:08:42.998 "state": "configuring", 00:08:42.998 "raid_level": "concat", 00:08:42.998 "superblock": true, 00:08:42.998 "num_base_bdevs": 3, 00:08:42.998 "num_base_bdevs_discovered": 1, 00:08:42.998 "num_base_bdevs_operational": 3, 00:08:42.998 "base_bdevs_list": [ 00:08:42.998 { 00:08:42.998 "name": "BaseBdev1", 00:08:42.998 "uuid": "c46a2667-75db-44df-88d0-c3726d8eac8f", 00:08:42.998 "is_configured": true, 00:08:42.998 "data_offset": 2048, 00:08:42.998 "data_size": 63488 00:08:42.998 }, 00:08:42.998 { 00:08:42.998 "name": null, 00:08:42.998 "uuid": "752e579b-0417-419c-9bf3-c131d61ddb93", 00:08:42.998 "is_configured": false, 00:08:42.998 "data_offset": 0, 00:08:42.998 "data_size": 63488 00:08:42.998 }, 00:08:42.998 { 00:08:42.998 "name": null, 00:08:42.998 "uuid": "7242f2c7-7e27-4e60-82e0-c8e08538c906", 00:08:42.998 "is_configured": false, 00:08:42.998 "data_offset": 0, 00:08:42.998 "data_size": 63488 00:08:42.998 } 00:08:42.998 ] 00:08:42.998 }' 00:08:42.998 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.998 12:52:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:43.257 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:43.257 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.257 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.257 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.258 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.258 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:43.258 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:43.258 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.258 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.258 [2024-11-26 12:52:00.872132] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:43.258 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.258 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:43.258 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.258 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.258 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:43.258 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.258 12:52:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.258 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.258 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.258 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.258 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.258 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.258 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.258 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.258 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.258 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.258 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.258 "name": "Existed_Raid", 00:08:43.258 "uuid": "9e24c9e8-32a2-47fa-9a9a-bca3acb26d9f", 00:08:43.258 "strip_size_kb": 64, 00:08:43.258 "state": "configuring", 00:08:43.258 "raid_level": "concat", 00:08:43.258 "superblock": true, 00:08:43.258 "num_base_bdevs": 3, 00:08:43.258 "num_base_bdevs_discovered": 2, 00:08:43.258 "num_base_bdevs_operational": 3, 00:08:43.258 "base_bdevs_list": [ 00:08:43.258 { 00:08:43.258 "name": "BaseBdev1", 00:08:43.258 "uuid": "c46a2667-75db-44df-88d0-c3726d8eac8f", 00:08:43.258 "is_configured": true, 00:08:43.258 "data_offset": 2048, 00:08:43.258 "data_size": 63488 00:08:43.258 }, 00:08:43.258 { 00:08:43.258 "name": null, 00:08:43.258 "uuid": "752e579b-0417-419c-9bf3-c131d61ddb93", 00:08:43.258 "is_configured": 
false, 00:08:43.258 "data_offset": 0, 00:08:43.258 "data_size": 63488 00:08:43.258 }, 00:08:43.258 { 00:08:43.258 "name": "BaseBdev3", 00:08:43.258 "uuid": "7242f2c7-7e27-4e60-82e0-c8e08538c906", 00:08:43.258 "is_configured": true, 00:08:43.258 "data_offset": 2048, 00:08:43.258 "data_size": 63488 00:08:43.258 } 00:08:43.258 ] 00:08:43.258 }' 00:08:43.258 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.258 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.826 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.826 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:43.826 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.826 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.826 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.826 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:43.826 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:43.826 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.826 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.826 [2024-11-26 12:52:01.303371] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:43.826 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.826 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:43.826 12:52:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.826 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.826 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:43.826 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.826 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.826 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.826 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.826 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.826 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.826 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.826 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.826 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.826 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.826 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.826 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.826 "name": "Existed_Raid", 00:08:43.826 "uuid": "9e24c9e8-32a2-47fa-9a9a-bca3acb26d9f", 00:08:43.826 "strip_size_kb": 64, 00:08:43.826 "state": "configuring", 00:08:43.826 "raid_level": "concat", 00:08:43.826 "superblock": true, 00:08:43.826 "num_base_bdevs": 3, 00:08:43.826 
"num_base_bdevs_discovered": 1, 00:08:43.826 "num_base_bdevs_operational": 3, 00:08:43.826 "base_bdevs_list": [ 00:08:43.826 { 00:08:43.826 "name": null, 00:08:43.826 "uuid": "c46a2667-75db-44df-88d0-c3726d8eac8f", 00:08:43.826 "is_configured": false, 00:08:43.826 "data_offset": 0, 00:08:43.826 "data_size": 63488 00:08:43.826 }, 00:08:43.826 { 00:08:43.826 "name": null, 00:08:43.826 "uuid": "752e579b-0417-419c-9bf3-c131d61ddb93", 00:08:43.826 "is_configured": false, 00:08:43.826 "data_offset": 0, 00:08:43.826 "data_size": 63488 00:08:43.826 }, 00:08:43.826 { 00:08:43.826 "name": "BaseBdev3", 00:08:43.826 "uuid": "7242f2c7-7e27-4e60-82e0-c8e08538c906", 00:08:43.826 "is_configured": true, 00:08:43.826 "data_offset": 2048, 00:08:43.826 "data_size": 63488 00:08:43.826 } 00:08:43.826 ] 00:08:43.826 }' 00:08:43.826 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.826 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.085 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:44.085 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.085 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.085 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.085 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.345 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:44.345 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:44.345 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.345 12:52:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.345 [2024-11-26 12:52:01.781051] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:44.345 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.345 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:44.345 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.345 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.345 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.345 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.345 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.345 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.345 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.345 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.345 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.345 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.346 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.346 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.346 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.346 
12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.346 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.346 "name": "Existed_Raid", 00:08:44.346 "uuid": "9e24c9e8-32a2-47fa-9a9a-bca3acb26d9f", 00:08:44.346 "strip_size_kb": 64, 00:08:44.346 "state": "configuring", 00:08:44.346 "raid_level": "concat", 00:08:44.346 "superblock": true, 00:08:44.346 "num_base_bdevs": 3, 00:08:44.346 "num_base_bdevs_discovered": 2, 00:08:44.346 "num_base_bdevs_operational": 3, 00:08:44.346 "base_bdevs_list": [ 00:08:44.346 { 00:08:44.346 "name": null, 00:08:44.346 "uuid": "c46a2667-75db-44df-88d0-c3726d8eac8f", 00:08:44.346 "is_configured": false, 00:08:44.346 "data_offset": 0, 00:08:44.346 "data_size": 63488 00:08:44.346 }, 00:08:44.346 { 00:08:44.346 "name": "BaseBdev2", 00:08:44.346 "uuid": "752e579b-0417-419c-9bf3-c131d61ddb93", 00:08:44.346 "is_configured": true, 00:08:44.346 "data_offset": 2048, 00:08:44.346 "data_size": 63488 00:08:44.346 }, 00:08:44.346 { 00:08:44.346 "name": "BaseBdev3", 00:08:44.346 "uuid": "7242f2c7-7e27-4e60-82e0-c8e08538c906", 00:08:44.346 "is_configured": true, 00:08:44.346 "data_offset": 2048, 00:08:44.346 "data_size": 63488 00:08:44.346 } 00:08:44.346 ] 00:08:44.346 }' 00:08:44.346 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.346 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.606 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.606 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:44.606 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.606 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:44.606 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.606 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:44.606 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.606 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.606 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.606 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:44.606 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.606 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c46a2667-75db-44df-88d0-c3726d8eac8f 00:08:44.606 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.606 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.606 [2024-11-26 12:52:02.274946] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:44.606 [2024-11-26 12:52:02.275218] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:44.606 [2024-11-26 12:52:02.275260] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:44.606 [2024-11-26 12:52:02.275546] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:44.606 [2024-11-26 12:52:02.275700] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:44.606 [2024-11-26 12:52:02.275740] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 
00:08:44.606 NewBaseBdev 00:08:44.606 [2024-11-26 12:52:02.275883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.606 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.606 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:44.606 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:44.606 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:44.606 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:44.606 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:44.606 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:44.606 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:44.606 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.606 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.866 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.866 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:44.866 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.866 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.866 [ 00:08:44.866 { 00:08:44.866 "name": "NewBaseBdev", 00:08:44.866 "aliases": [ 00:08:44.866 "c46a2667-75db-44df-88d0-c3726d8eac8f" 00:08:44.866 ], 00:08:44.866 "product_name": "Malloc disk", 00:08:44.866 "block_size": 512, 
00:08:44.866 "num_blocks": 65536, 00:08:44.866 "uuid": "c46a2667-75db-44df-88d0-c3726d8eac8f", 00:08:44.866 "assigned_rate_limits": { 00:08:44.866 "rw_ios_per_sec": 0, 00:08:44.866 "rw_mbytes_per_sec": 0, 00:08:44.866 "r_mbytes_per_sec": 0, 00:08:44.866 "w_mbytes_per_sec": 0 00:08:44.866 }, 00:08:44.866 "claimed": true, 00:08:44.866 "claim_type": "exclusive_write", 00:08:44.866 "zoned": false, 00:08:44.866 "supported_io_types": { 00:08:44.866 "read": true, 00:08:44.866 "write": true, 00:08:44.866 "unmap": true, 00:08:44.866 "flush": true, 00:08:44.866 "reset": true, 00:08:44.866 "nvme_admin": false, 00:08:44.866 "nvme_io": false, 00:08:44.866 "nvme_io_md": false, 00:08:44.866 "write_zeroes": true, 00:08:44.866 "zcopy": true, 00:08:44.866 "get_zone_info": false, 00:08:44.866 "zone_management": false, 00:08:44.866 "zone_append": false, 00:08:44.866 "compare": false, 00:08:44.866 "compare_and_write": false, 00:08:44.866 "abort": true, 00:08:44.866 "seek_hole": false, 00:08:44.866 "seek_data": false, 00:08:44.866 "copy": true, 00:08:44.866 "nvme_iov_md": false 00:08:44.866 }, 00:08:44.866 "memory_domains": [ 00:08:44.866 { 00:08:44.866 "dma_device_id": "system", 00:08:44.866 "dma_device_type": 1 00:08:44.866 }, 00:08:44.866 { 00:08:44.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.866 "dma_device_type": 2 00:08:44.866 } 00:08:44.866 ], 00:08:44.866 "driver_specific": {} 00:08:44.866 } 00:08:44.866 ] 00:08:44.866 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.866 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:44.866 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:44.866 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.866 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:08:44.866 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.866 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.866 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.866 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.866 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.866 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.866 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.866 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.866 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.866 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.866 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.867 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.867 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.867 "name": "Existed_Raid", 00:08:44.867 "uuid": "9e24c9e8-32a2-47fa-9a9a-bca3acb26d9f", 00:08:44.867 "strip_size_kb": 64, 00:08:44.867 "state": "online", 00:08:44.867 "raid_level": "concat", 00:08:44.867 "superblock": true, 00:08:44.867 "num_base_bdevs": 3, 00:08:44.867 "num_base_bdevs_discovered": 3, 00:08:44.867 "num_base_bdevs_operational": 3, 00:08:44.867 "base_bdevs_list": [ 00:08:44.867 { 00:08:44.867 "name": "NewBaseBdev", 00:08:44.867 "uuid": 
"c46a2667-75db-44df-88d0-c3726d8eac8f", 00:08:44.867 "is_configured": true, 00:08:44.867 "data_offset": 2048, 00:08:44.867 "data_size": 63488 00:08:44.867 }, 00:08:44.867 { 00:08:44.867 "name": "BaseBdev2", 00:08:44.867 "uuid": "752e579b-0417-419c-9bf3-c131d61ddb93", 00:08:44.867 "is_configured": true, 00:08:44.867 "data_offset": 2048, 00:08:44.867 "data_size": 63488 00:08:44.867 }, 00:08:44.867 { 00:08:44.867 "name": "BaseBdev3", 00:08:44.867 "uuid": "7242f2c7-7e27-4e60-82e0-c8e08538c906", 00:08:44.867 "is_configured": true, 00:08:44.867 "data_offset": 2048, 00:08:44.867 "data_size": 63488 00:08:44.867 } 00:08:44.867 ] 00:08:44.867 }' 00:08:44.867 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.867 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.127 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:45.127 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:45.127 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:45.127 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:45.127 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:45.127 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:45.127 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:45.127 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:45.127 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.127 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:45.127 [2024-11-26 12:52:02.770399] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:45.127 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.127 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:45.127 "name": "Existed_Raid", 00:08:45.127 "aliases": [ 00:08:45.127 "9e24c9e8-32a2-47fa-9a9a-bca3acb26d9f" 00:08:45.127 ], 00:08:45.127 "product_name": "Raid Volume", 00:08:45.127 "block_size": 512, 00:08:45.127 "num_blocks": 190464, 00:08:45.127 "uuid": "9e24c9e8-32a2-47fa-9a9a-bca3acb26d9f", 00:08:45.127 "assigned_rate_limits": { 00:08:45.127 "rw_ios_per_sec": 0, 00:08:45.127 "rw_mbytes_per_sec": 0, 00:08:45.127 "r_mbytes_per_sec": 0, 00:08:45.127 "w_mbytes_per_sec": 0 00:08:45.127 }, 00:08:45.127 "claimed": false, 00:08:45.127 "zoned": false, 00:08:45.127 "supported_io_types": { 00:08:45.127 "read": true, 00:08:45.127 "write": true, 00:08:45.127 "unmap": true, 00:08:45.127 "flush": true, 00:08:45.127 "reset": true, 00:08:45.127 "nvme_admin": false, 00:08:45.127 "nvme_io": false, 00:08:45.127 "nvme_io_md": false, 00:08:45.127 "write_zeroes": true, 00:08:45.127 "zcopy": false, 00:08:45.127 "get_zone_info": false, 00:08:45.127 "zone_management": false, 00:08:45.127 "zone_append": false, 00:08:45.127 "compare": false, 00:08:45.127 "compare_and_write": false, 00:08:45.127 "abort": false, 00:08:45.127 "seek_hole": false, 00:08:45.127 "seek_data": false, 00:08:45.127 "copy": false, 00:08:45.127 "nvme_iov_md": false 00:08:45.127 }, 00:08:45.127 "memory_domains": [ 00:08:45.127 { 00:08:45.127 "dma_device_id": "system", 00:08:45.127 "dma_device_type": 1 00:08:45.127 }, 00:08:45.127 { 00:08:45.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.127 "dma_device_type": 2 00:08:45.127 }, 00:08:45.127 { 00:08:45.127 "dma_device_id": "system", 00:08:45.127 "dma_device_type": 1 00:08:45.127 }, 00:08:45.127 { 00:08:45.127 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.127 "dma_device_type": 2 00:08:45.127 }, 00:08:45.127 { 00:08:45.127 "dma_device_id": "system", 00:08:45.127 "dma_device_type": 1 00:08:45.127 }, 00:08:45.127 { 00:08:45.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.128 "dma_device_type": 2 00:08:45.128 } 00:08:45.128 ], 00:08:45.128 "driver_specific": { 00:08:45.128 "raid": { 00:08:45.128 "uuid": "9e24c9e8-32a2-47fa-9a9a-bca3acb26d9f", 00:08:45.128 "strip_size_kb": 64, 00:08:45.128 "state": "online", 00:08:45.128 "raid_level": "concat", 00:08:45.128 "superblock": true, 00:08:45.128 "num_base_bdevs": 3, 00:08:45.128 "num_base_bdevs_discovered": 3, 00:08:45.128 "num_base_bdevs_operational": 3, 00:08:45.128 "base_bdevs_list": [ 00:08:45.128 { 00:08:45.128 "name": "NewBaseBdev", 00:08:45.128 "uuid": "c46a2667-75db-44df-88d0-c3726d8eac8f", 00:08:45.128 "is_configured": true, 00:08:45.128 "data_offset": 2048, 00:08:45.128 "data_size": 63488 00:08:45.128 }, 00:08:45.128 { 00:08:45.128 "name": "BaseBdev2", 00:08:45.128 "uuid": "752e579b-0417-419c-9bf3-c131d61ddb93", 00:08:45.128 "is_configured": true, 00:08:45.128 "data_offset": 2048, 00:08:45.128 "data_size": 63488 00:08:45.128 }, 00:08:45.128 { 00:08:45.128 "name": "BaseBdev3", 00:08:45.128 "uuid": "7242f2c7-7e27-4e60-82e0-c8e08538c906", 00:08:45.128 "is_configured": true, 00:08:45.128 "data_offset": 2048, 00:08:45.128 "data_size": 63488 00:08:45.128 } 00:08:45.128 ] 00:08:45.128 } 00:08:45.128 } 00:08:45.128 }' 00:08:45.388 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:45.388 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:45.388 BaseBdev2 00:08:45.388 BaseBdev3' 00:08:45.388 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:08:45.388 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:45.388 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.388 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:45.388 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.388 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.388 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.388 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.388 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.388 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.388 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.388 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:45.388 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.388 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.388 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.388 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.388 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.388 12:52:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.388 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.388 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:45.388 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.388 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.388 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.388 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.388 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.388 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.388 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:45.388 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.388 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.388 [2024-11-26 12:52:03.013734] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:45.388 [2024-11-26 12:52:03.013759] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:45.388 [2024-11-26 12:52:03.013821] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:45.388 [2024-11-26 12:52:03.013872] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:45.388 [2024-11-26 12:52:03.013884] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006d00 name Existed_Raid, state offline 00:08:45.388 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.388 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77572 00:08:45.388 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 77572 ']' 00:08:45.388 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 77572 00:08:45.388 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:45.388 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:45.388 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77572 00:08:45.388 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:45.388 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:45.388 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77572' 00:08:45.388 killing process with pid 77572 00:08:45.388 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 77572 00:08:45.388 [2024-11-26 12:52:03.063459] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:45.388 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 77572 00:08:45.648 [2024-11-26 12:52:03.094123] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:45.907 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:45.907 00:08:45.907 real 0m8.627s 00:08:45.907 user 0m14.615s 00:08:45.907 sys 0m1.805s 00:08:45.907 ************************************ 00:08:45.907 END TEST raid_state_function_test_sb 
00:08:45.907 ************************************ 00:08:45.907 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.907 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.907 12:52:03 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:08:45.907 12:52:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:45.907 12:52:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.907 12:52:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:45.907 ************************************ 00:08:45.907 START TEST raid_superblock_test 00:08:45.907 ************************************ 00:08:45.907 12:52:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:08:45.907 12:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:45.907 12:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:45.907 12:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:45.907 12:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:45.907 12:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:45.907 12:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:45.907 12:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:45.907 12:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:45.907 12:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:45.907 12:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:45.907 12:52:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:45.907 12:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:45.907 12:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:45.907 12:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:45.907 12:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:45.907 12:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:45.907 12:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=78170 00:08:45.907 12:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:45.907 12:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 78170 00:08:45.907 12:52:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 78170 ']' 00:08:45.907 12:52:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.907 12:52:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:45.907 12:52:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.907 12:52:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:45.907 12:52:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.907 [2024-11-26 12:52:03.518169] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:45.907 [2024-11-26 12:52:03.518402] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78170 ] 00:08:46.166 [2024-11-26 12:52:03.681837] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.166 [2024-11-26 12:52:03.726921] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.166 [2024-11-26 12:52:03.769947] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.166 [2024-11-26 12:52:03.769985] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.758 12:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:46.758 12:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:46.758 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:46.758 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:46.758 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:46.758 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:46.758 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:46.758 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:46.758 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:46.758 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:46.758 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:46.758 
12:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.758 12:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.758 malloc1 00:08:46.758 12:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.758 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:46.758 12:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.758 12:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.758 [2024-11-26 12:52:04.367975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:46.758 [2024-11-26 12:52:04.368110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:46.758 [2024-11-26 12:52:04.368164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:46.758 [2024-11-26 12:52:04.368222] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:46.758 [2024-11-26 12:52:04.370341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.758 [2024-11-26 12:52:04.370415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:46.758 pt1 00:08:46.758 12:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.758 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:46.758 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:46.758 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:46.758 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:46.758 12:52:04 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:46.758 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:46.758 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:46.758 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:46.758 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:46.758 12:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.759 12:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.759 malloc2 00:08:46.759 12:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.759 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:46.759 12:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.759 12:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.759 [2024-11-26 12:52:04.412378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:46.759 [2024-11-26 12:52:04.412621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:46.759 [2024-11-26 12:52:04.412725] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:46.759 [2024-11-26 12:52:04.412828] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:46.759 [2024-11-26 12:52:04.418689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.759 [2024-11-26 12:52:04.418891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:46.759 
pt2 00:08:46.759 12:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.759 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:46.759 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:46.759 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:46.759 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:46.759 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:46.759 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:46.759 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:46.759 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:46.759 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:46.759 12:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.759 12:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.078 malloc3 00:08:47.078 12:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.078 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:47.078 12:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.078 12:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.078 [2024-11-26 12:52:04.449698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:47.078 [2024-11-26 12:52:04.449786] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.078 [2024-11-26 12:52:04.449837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:47.078 [2024-11-26 12:52:04.449870] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.078 [2024-11-26 12:52:04.451956] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.078 [2024-11-26 12:52:04.452030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:47.078 pt3 00:08:47.078 12:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.078 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:47.078 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:47.078 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:47.078 12:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.078 12:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.078 [2024-11-26 12:52:04.461723] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:47.078 [2024-11-26 12:52:04.463616] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:47.079 [2024-11-26 12:52:04.463722] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:47.079 [2024-11-26 12:52:04.463885] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:47.079 [2024-11-26 12:52:04.463929] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:47.079 [2024-11-26 12:52:04.464206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 
00:08:47.079 [2024-11-26 12:52:04.464367] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:47.079 [2024-11-26 12:52:04.464412] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:08:47.079 [2024-11-26 12:52:04.464572] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.079 12:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.079 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:47.079 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:47.079 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:47.079 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.079 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.079 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.079 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.079 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.079 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.079 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.079 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.079 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:47.079 12:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.079 12:52:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.079 12:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.079 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.079 "name": "raid_bdev1", 00:08:47.079 "uuid": "8635c2b8-0801-4459-918f-90d29077de98", 00:08:47.079 "strip_size_kb": 64, 00:08:47.079 "state": "online", 00:08:47.079 "raid_level": "concat", 00:08:47.079 "superblock": true, 00:08:47.079 "num_base_bdevs": 3, 00:08:47.079 "num_base_bdevs_discovered": 3, 00:08:47.079 "num_base_bdevs_operational": 3, 00:08:47.079 "base_bdevs_list": [ 00:08:47.079 { 00:08:47.079 "name": "pt1", 00:08:47.079 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:47.079 "is_configured": true, 00:08:47.079 "data_offset": 2048, 00:08:47.079 "data_size": 63488 00:08:47.079 }, 00:08:47.079 { 00:08:47.079 "name": "pt2", 00:08:47.079 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:47.079 "is_configured": true, 00:08:47.079 "data_offset": 2048, 00:08:47.079 "data_size": 63488 00:08:47.079 }, 00:08:47.079 { 00:08:47.079 "name": "pt3", 00:08:47.079 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:47.079 "is_configured": true, 00:08:47.079 "data_offset": 2048, 00:08:47.079 "data_size": 63488 00:08:47.079 } 00:08:47.079 ] 00:08:47.079 }' 00:08:47.079 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.079 12:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.337 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:47.337 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:47.337 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:47.337 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:47.337 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:47.337 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:47.337 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:47.337 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:47.337 12:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.337 12:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.337 [2024-11-26 12:52:04.937156] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:47.337 12:52:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.337 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:47.337 "name": "raid_bdev1", 00:08:47.337 "aliases": [ 00:08:47.337 "8635c2b8-0801-4459-918f-90d29077de98" 00:08:47.337 ], 00:08:47.337 "product_name": "Raid Volume", 00:08:47.337 "block_size": 512, 00:08:47.337 "num_blocks": 190464, 00:08:47.337 "uuid": "8635c2b8-0801-4459-918f-90d29077de98", 00:08:47.337 "assigned_rate_limits": { 00:08:47.337 "rw_ios_per_sec": 0, 00:08:47.337 "rw_mbytes_per_sec": 0, 00:08:47.337 "r_mbytes_per_sec": 0, 00:08:47.337 "w_mbytes_per_sec": 0 00:08:47.337 }, 00:08:47.337 "claimed": false, 00:08:47.337 "zoned": false, 00:08:47.337 "supported_io_types": { 00:08:47.337 "read": true, 00:08:47.337 "write": true, 00:08:47.337 "unmap": true, 00:08:47.337 "flush": true, 00:08:47.337 "reset": true, 00:08:47.337 "nvme_admin": false, 00:08:47.337 "nvme_io": false, 00:08:47.337 "nvme_io_md": false, 00:08:47.337 "write_zeroes": true, 00:08:47.337 "zcopy": false, 00:08:47.337 "get_zone_info": false, 00:08:47.337 "zone_management": false, 00:08:47.337 "zone_append": false, 00:08:47.337 "compare": 
false, 00:08:47.337 "compare_and_write": false, 00:08:47.337 "abort": false, 00:08:47.337 "seek_hole": false, 00:08:47.337 "seek_data": false, 00:08:47.337 "copy": false, 00:08:47.337 "nvme_iov_md": false 00:08:47.337 }, 00:08:47.337 "memory_domains": [ 00:08:47.337 { 00:08:47.337 "dma_device_id": "system", 00:08:47.337 "dma_device_type": 1 00:08:47.337 }, 00:08:47.337 { 00:08:47.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.337 "dma_device_type": 2 00:08:47.337 }, 00:08:47.337 { 00:08:47.337 "dma_device_id": "system", 00:08:47.337 "dma_device_type": 1 00:08:47.337 }, 00:08:47.337 { 00:08:47.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.337 "dma_device_type": 2 00:08:47.337 }, 00:08:47.337 { 00:08:47.337 "dma_device_id": "system", 00:08:47.337 "dma_device_type": 1 00:08:47.337 }, 00:08:47.337 { 00:08:47.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.337 "dma_device_type": 2 00:08:47.337 } 00:08:47.337 ], 00:08:47.337 "driver_specific": { 00:08:47.337 "raid": { 00:08:47.337 "uuid": "8635c2b8-0801-4459-918f-90d29077de98", 00:08:47.337 "strip_size_kb": 64, 00:08:47.337 "state": "online", 00:08:47.337 "raid_level": "concat", 00:08:47.337 "superblock": true, 00:08:47.337 "num_base_bdevs": 3, 00:08:47.337 "num_base_bdevs_discovered": 3, 00:08:47.337 "num_base_bdevs_operational": 3, 00:08:47.337 "base_bdevs_list": [ 00:08:47.337 { 00:08:47.337 "name": "pt1", 00:08:47.337 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:47.337 "is_configured": true, 00:08:47.337 "data_offset": 2048, 00:08:47.337 "data_size": 63488 00:08:47.337 }, 00:08:47.337 { 00:08:47.337 "name": "pt2", 00:08:47.337 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:47.337 "is_configured": true, 00:08:47.337 "data_offset": 2048, 00:08:47.337 "data_size": 63488 00:08:47.337 }, 00:08:47.337 { 00:08:47.337 "name": "pt3", 00:08:47.337 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:47.337 "is_configured": true, 00:08:47.337 "data_offset": 2048, 00:08:47.337 
"data_size": 63488 00:08:47.337 } 00:08:47.337 ] 00:08:47.337 } 00:08:47.337 } 00:08:47.337 }' 00:08:47.337 12:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:47.337 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:47.337 pt2 00:08:47.337 pt3' 00:08:47.337 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.596 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:47.596 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.597 12:52:05 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:47.597 [2024-11-26 12:52:05.220646] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:47.597 12:52:05 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8635c2b8-0801-4459-918f-90d29077de98 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8635c2b8-0801-4459-918f-90d29077de98 ']' 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.597 [2024-11-26 12:52:05.268307] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:47.597 [2024-11-26 12:52:05.268379] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:47.597 [2024-11-26 12:52:05.268464] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:47.597 [2024-11-26 12:52:05.268531] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:47.597 [2024-11-26 12:52:05.268547] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:08:47.597 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.857 12:52:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.857 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.858 [2024-11-26 12:52:05.408091] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:47.858 [2024-11-26 12:52:05.409936] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:08:47.858 [2024-11-26 12:52:05.409980] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:47.858 [2024-11-26 12:52:05.410023] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:47.858 [2024-11-26 12:52:05.410072] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:47.858 [2024-11-26 12:52:05.410108] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:47.858 [2024-11-26 12:52:05.410119] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:47.858 [2024-11-26 12:52:05.410129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:08:47.858 request: 00:08:47.858 { 00:08:47.858 "name": "raid_bdev1", 00:08:47.858 "raid_level": "concat", 00:08:47.858 "base_bdevs": [ 00:08:47.858 "malloc1", 00:08:47.858 "malloc2", 00:08:47.858 "malloc3" 00:08:47.858 ], 00:08:47.858 "strip_size_kb": 64, 00:08:47.858 "superblock": false, 00:08:47.858 "method": "bdev_raid_create", 00:08:47.858 "req_id": 1 00:08:47.858 } 00:08:47.858 Got JSON-RPC error response 00:08:47.858 response: 00:08:47.858 { 00:08:47.858 "code": -17, 00:08:47.858 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:47.858 } 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es 
== 0 )) 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.858 [2024-11-26 12:52:05.475949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:47.858 [2024-11-26 12:52:05.475997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.858 [2024-11-26 12:52:05.476011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:47.858 [2024-11-26 12:52:05.476022] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.858 [2024-11-26 12:52:05.478065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.858 [2024-11-26 12:52:05.478104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:47.858 [2024-11-26 12:52:05.478163] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:47.858 [2024-11-26 12:52:05.478214] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:47.858 pt1 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.858 "name": "raid_bdev1", 
00:08:47.858 "uuid": "8635c2b8-0801-4459-918f-90d29077de98", 00:08:47.858 "strip_size_kb": 64, 00:08:47.858 "state": "configuring", 00:08:47.858 "raid_level": "concat", 00:08:47.858 "superblock": true, 00:08:47.858 "num_base_bdevs": 3, 00:08:47.858 "num_base_bdevs_discovered": 1, 00:08:47.858 "num_base_bdevs_operational": 3, 00:08:47.858 "base_bdevs_list": [ 00:08:47.858 { 00:08:47.858 "name": "pt1", 00:08:47.858 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:47.858 "is_configured": true, 00:08:47.858 "data_offset": 2048, 00:08:47.858 "data_size": 63488 00:08:47.858 }, 00:08:47.858 { 00:08:47.858 "name": null, 00:08:47.858 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:47.858 "is_configured": false, 00:08:47.858 "data_offset": 2048, 00:08:47.858 "data_size": 63488 00:08:47.858 }, 00:08:47.858 { 00:08:47.858 "name": null, 00:08:47.858 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:47.858 "is_configured": false, 00:08:47.858 "data_offset": 2048, 00:08:47.858 "data_size": 63488 00:08:47.858 } 00:08:47.858 ] 00:08:47.858 }' 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.858 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.427 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:48.427 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:48.427 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.427 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.427 [2024-11-26 12:52:05.915302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:48.427 [2024-11-26 12:52:05.915397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.427 [2024-11-26 12:52:05.915433] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:08:48.427 [2024-11-26 12:52:05.915464] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.427 [2024-11-26 12:52:05.915815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.427 [2024-11-26 12:52:05.915873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:48.427 [2024-11-26 12:52:05.915958] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:48.427 [2024-11-26 12:52:05.916007] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:48.427 pt2 00:08:48.427 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.427 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:48.427 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.427 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.427 [2024-11-26 12:52:05.927310] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:48.427 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.427 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:48.427 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:48.427 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.427 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.427 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.427 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:08:48.427 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.427 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.427 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.427 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.427 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:48.427 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.427 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.427 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.427 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.427 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.427 "name": "raid_bdev1", 00:08:48.427 "uuid": "8635c2b8-0801-4459-918f-90d29077de98", 00:08:48.427 "strip_size_kb": 64, 00:08:48.427 "state": "configuring", 00:08:48.427 "raid_level": "concat", 00:08:48.427 "superblock": true, 00:08:48.427 "num_base_bdevs": 3, 00:08:48.427 "num_base_bdevs_discovered": 1, 00:08:48.427 "num_base_bdevs_operational": 3, 00:08:48.427 "base_bdevs_list": [ 00:08:48.427 { 00:08:48.427 "name": "pt1", 00:08:48.427 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:48.427 "is_configured": true, 00:08:48.427 "data_offset": 2048, 00:08:48.427 "data_size": 63488 00:08:48.427 }, 00:08:48.428 { 00:08:48.428 "name": null, 00:08:48.428 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:48.428 "is_configured": false, 00:08:48.428 "data_offset": 0, 00:08:48.428 "data_size": 63488 00:08:48.428 }, 00:08:48.428 { 00:08:48.428 "name": null, 00:08:48.428 
"uuid": "00000000-0000-0000-0000-000000000003", 00:08:48.428 "is_configured": false, 00:08:48.428 "data_offset": 2048, 00:08:48.428 "data_size": 63488 00:08:48.428 } 00:08:48.428 ] 00:08:48.428 }' 00:08:48.428 12:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.428 12:52:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.997 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:48.998 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:48.998 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:48.998 12:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.998 12:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.998 [2024-11-26 12:52:06.402561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:48.998 [2024-11-26 12:52:06.402663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.998 [2024-11-26 12:52:06.402701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:48.998 [2024-11-26 12:52:06.402710] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.998 [2024-11-26 12:52:06.403056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.998 [2024-11-26 12:52:06.403071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:48.998 [2024-11-26 12:52:06.403134] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:48.998 [2024-11-26 12:52:06.403163] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:48.998 pt2 00:08:48.998 12:52:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.998 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:48.998 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:48.998 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:48.998 12:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.998 12:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.998 [2024-11-26 12:52:06.410529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:48.998 [2024-11-26 12:52:06.410572] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.998 [2024-11-26 12:52:06.410589] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:48.998 [2024-11-26 12:52:06.410597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.998 [2024-11-26 12:52:06.410906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.998 [2024-11-26 12:52:06.410921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:48.998 [2024-11-26 12:52:06.410970] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:48.998 [2024-11-26 12:52:06.410984] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:48.998 [2024-11-26 12:52:06.411067] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:48.998 [2024-11-26 12:52:06.411075] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:48.998 [2024-11-26 12:52:06.411326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005d40 00:08:48.998 [2024-11-26 12:52:06.411426] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:48.998 [2024-11-26 12:52:06.411437] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:48.998 [2024-11-26 12:52:06.411525] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.998 pt3 00:08:48.998 12:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.998 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:48.998 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:48.998 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:48.998 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:48.998 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.998 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.998 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.998 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.998 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.998 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.998 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.998 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.998 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.998 12:52:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.998 12:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.998 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:48.998 12:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.998 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.998 "name": "raid_bdev1", 00:08:48.998 "uuid": "8635c2b8-0801-4459-918f-90d29077de98", 00:08:48.998 "strip_size_kb": 64, 00:08:48.998 "state": "online", 00:08:48.998 "raid_level": "concat", 00:08:48.998 "superblock": true, 00:08:48.998 "num_base_bdevs": 3, 00:08:48.998 "num_base_bdevs_discovered": 3, 00:08:48.998 "num_base_bdevs_operational": 3, 00:08:48.998 "base_bdevs_list": [ 00:08:48.998 { 00:08:48.998 "name": "pt1", 00:08:48.998 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:48.998 "is_configured": true, 00:08:48.998 "data_offset": 2048, 00:08:48.998 "data_size": 63488 00:08:48.998 }, 00:08:48.998 { 00:08:48.998 "name": "pt2", 00:08:48.998 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:48.998 "is_configured": true, 00:08:48.998 "data_offset": 2048, 00:08:48.998 "data_size": 63488 00:08:48.998 }, 00:08:48.998 { 00:08:48.998 "name": "pt3", 00:08:48.998 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:48.998 "is_configured": true, 00:08:48.998 "data_offset": 2048, 00:08:48.998 "data_size": 63488 00:08:48.998 } 00:08:48.998 ] 00:08:48.998 }' 00:08:48.998 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.998 12:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.258 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:49.258 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:08:49.258 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:49.258 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:49.258 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:49.258 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:49.258 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:49.258 12:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.258 12:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.258 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:49.258 [2024-11-26 12:52:06.866014] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:49.258 12:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.258 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:49.258 "name": "raid_bdev1", 00:08:49.258 "aliases": [ 00:08:49.258 "8635c2b8-0801-4459-918f-90d29077de98" 00:08:49.258 ], 00:08:49.258 "product_name": "Raid Volume", 00:08:49.258 "block_size": 512, 00:08:49.258 "num_blocks": 190464, 00:08:49.258 "uuid": "8635c2b8-0801-4459-918f-90d29077de98", 00:08:49.258 "assigned_rate_limits": { 00:08:49.258 "rw_ios_per_sec": 0, 00:08:49.258 "rw_mbytes_per_sec": 0, 00:08:49.258 "r_mbytes_per_sec": 0, 00:08:49.258 "w_mbytes_per_sec": 0 00:08:49.258 }, 00:08:49.258 "claimed": false, 00:08:49.258 "zoned": false, 00:08:49.258 "supported_io_types": { 00:08:49.258 "read": true, 00:08:49.258 "write": true, 00:08:49.258 "unmap": true, 00:08:49.258 "flush": true, 00:08:49.258 "reset": true, 00:08:49.258 "nvme_admin": false, 00:08:49.258 "nvme_io": false, 
00:08:49.258 "nvme_io_md": false, 00:08:49.258 "write_zeroes": true, 00:08:49.258 "zcopy": false, 00:08:49.258 "get_zone_info": false, 00:08:49.258 "zone_management": false, 00:08:49.258 "zone_append": false, 00:08:49.258 "compare": false, 00:08:49.258 "compare_and_write": false, 00:08:49.258 "abort": false, 00:08:49.258 "seek_hole": false, 00:08:49.258 "seek_data": false, 00:08:49.258 "copy": false, 00:08:49.258 "nvme_iov_md": false 00:08:49.258 }, 00:08:49.258 "memory_domains": [ 00:08:49.258 { 00:08:49.258 "dma_device_id": "system", 00:08:49.258 "dma_device_type": 1 00:08:49.258 }, 00:08:49.258 { 00:08:49.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.258 "dma_device_type": 2 00:08:49.258 }, 00:08:49.258 { 00:08:49.258 "dma_device_id": "system", 00:08:49.258 "dma_device_type": 1 00:08:49.258 }, 00:08:49.258 { 00:08:49.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.258 "dma_device_type": 2 00:08:49.258 }, 00:08:49.258 { 00:08:49.258 "dma_device_id": "system", 00:08:49.258 "dma_device_type": 1 00:08:49.258 }, 00:08:49.258 { 00:08:49.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.258 "dma_device_type": 2 00:08:49.258 } 00:08:49.258 ], 00:08:49.258 "driver_specific": { 00:08:49.258 "raid": { 00:08:49.258 "uuid": "8635c2b8-0801-4459-918f-90d29077de98", 00:08:49.258 "strip_size_kb": 64, 00:08:49.258 "state": "online", 00:08:49.258 "raid_level": "concat", 00:08:49.258 "superblock": true, 00:08:49.258 "num_base_bdevs": 3, 00:08:49.258 "num_base_bdevs_discovered": 3, 00:08:49.258 "num_base_bdevs_operational": 3, 00:08:49.258 "base_bdevs_list": [ 00:08:49.258 { 00:08:49.258 "name": "pt1", 00:08:49.258 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:49.258 "is_configured": true, 00:08:49.258 "data_offset": 2048, 00:08:49.258 "data_size": 63488 00:08:49.258 }, 00:08:49.258 { 00:08:49.258 "name": "pt2", 00:08:49.258 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:49.258 "is_configured": true, 00:08:49.258 "data_offset": 2048, 00:08:49.258 
"data_size": 63488 00:08:49.258 }, 00:08:49.258 { 00:08:49.258 "name": "pt3", 00:08:49.258 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:49.258 "is_configured": true, 00:08:49.258 "data_offset": 2048, 00:08:49.258 "data_size": 63488 00:08:49.258 } 00:08:49.258 ] 00:08:49.258 } 00:08:49.258 } 00:08:49.258 }' 00:08:49.258 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:49.518 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:49.518 pt2 00:08:49.518 pt3' 00:08:49.518 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.518 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:49.518 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.518 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.518 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:49.518 12:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.518 12:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:49.518 [2024-11-26 12:52:07.129562] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8635c2b8-0801-4459-918f-90d29077de98 '!=' 8635c2b8-0801-4459-918f-90d29077de98 ']' 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 78170 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 78170 ']' 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 78170 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:49.518 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78170 00:08:49.778 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:49.778 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:49.778 killing process with pid 78170 00:08:49.778 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78170' 00:08:49.778 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 78170 00:08:49.778 [2024-11-26 12:52:07.216992] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:08:49.778 [2024-11-26 12:52:07.217074] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:49.778 [2024-11-26 12:52:07.217141] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:49.778 [2024-11-26 12:52:07.217151] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:49.778 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 78170 00:08:49.778 [2024-11-26 12:52:07.249640] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:50.039 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:50.039 00:08:50.039 real 0m4.076s 00:08:50.039 user 0m6.344s 00:08:50.039 sys 0m0.911s 00:08:50.039 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.039 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.039 ************************************ 00:08:50.039 END TEST raid_superblock_test 00:08:50.039 ************************************ 00:08:50.039 12:52:07 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:08:50.039 12:52:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:50.039 12:52:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.039 12:52:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:50.039 ************************************ 00:08:50.039 START TEST raid_read_error_test 00:08:50.039 ************************************ 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:50.039 12:52:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.DTGPsNbnuV 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78412 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78412 00:08:50.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 78412 ']' 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:50.039 12:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.039 [2024-11-26 12:52:07.680608] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
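The `waitforlisten 78412` call traced above blocks until the freshly launched bdevperf process creates its RPC socket at `/var/tmp/spdk.sock` (note the `local max_retries=100` in the expansion). A minimal sketch of that polling pattern in plain bash — `wait_for_path` is a hypothetical stand-in for the real `waitforlisten`, and it polls a temp file instead of the actual socket so it runs without an SPDK target:

```shell
# Sketch of the waitforlisten polling pattern: retry with a budget
# until the target's RPC endpoint path exists on disk.
wait_for_path() {
  local path=$1 max_retries=${2:-100}
  local i=0
  while [ ! -e "$path" ]; do
    i=$((i + 1))
    [ "$i" -ge "$max_retries" ] && return 1
    sleep 0.01
  done
  return 0
}

sock=$(mktemp)   # stand-in for /var/tmp/spdk.sock (exists immediately)
wait_for_path "$sock" && listening=yes
```

In the real helper the retry loop also checks that the PID is still alive, so a crashed target fails fast instead of burning the whole retry budget.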
00:08:50.039 [2024-11-26 12:52:07.680718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78412 ] 00:08:50.300 [2024-11-26 12:52:07.839827] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.300 [2024-11-26 12:52:07.920818] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.560 [2024-11-26 12:52:07.999834] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.560 [2024-11-26 12:52:07.999881] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.131 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:51.131 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:51.131 12:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:51.131 12:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:51.131 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.132 BaseBdev1_malloc 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.132 true 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.132 [2024-11-26 12:52:08.564467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:51.132 [2024-11-26 12:52:08.564529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.132 [2024-11-26 12:52:08.564549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:51.132 [2024-11-26 12:52:08.564559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.132 [2024-11-26 12:52:08.567002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.132 [2024-11-26 12:52:08.567037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:51.132 BaseBdev1 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.132 BaseBdev2_malloc 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.132 true 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.132 [2024-11-26 12:52:08.621043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:51.132 [2024-11-26 12:52:08.621099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.132 [2024-11-26 12:52:08.621118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:51.132 [2024-11-26 12:52:08.621127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.132 [2024-11-26 12:52:08.623549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.132 [2024-11-26 12:52:08.623655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:51.132 BaseBdev2 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.132 BaseBdev3_malloc 00:08:51.132 12:52:08 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.132 true 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.132 [2024-11-26 12:52:08.667652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:51.132 [2024-11-26 12:52:08.667704] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.132 [2024-11-26 12:52:08.667725] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:51.132 [2024-11-26 12:52:08.667734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.132 [2024-11-26 12:52:08.670148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.132 [2024-11-26 12:52:08.670275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:51.132 BaseBdev3 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.132 [2024-11-26 12:52:08.679706] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:51.132 [2024-11-26 12:52:08.681830] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:51.132 [2024-11-26 12:52:08.681958] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:51.132 [2024-11-26 12:52:08.682158] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:51.132 [2024-11-26 12:52:08.682190] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:51.132 [2024-11-26 12:52:08.682446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:51.132 [2024-11-26 12:52:08.682583] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:51.132 [2024-11-26 12:52:08.682593] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:51.132 [2024-11-26 12:52:08.682718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.132 12:52:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.132 "name": "raid_bdev1", 00:08:51.132 "uuid": "8a17d1c6-3340-44a2-8c7f-329b2ed22765", 00:08:51.132 "strip_size_kb": 64, 00:08:51.132 "state": "online", 00:08:51.132 "raid_level": "concat", 00:08:51.132 "superblock": true, 00:08:51.132 "num_base_bdevs": 3, 00:08:51.132 "num_base_bdevs_discovered": 3, 00:08:51.132 "num_base_bdevs_operational": 3, 00:08:51.132 "base_bdevs_list": [ 00:08:51.132 { 00:08:51.132 "name": "BaseBdev1", 00:08:51.132 "uuid": "b3773d40-0752-50ca-9497-e248ddb0a5d9", 00:08:51.132 "is_configured": true, 00:08:51.132 "data_offset": 2048, 00:08:51.132 "data_size": 63488 00:08:51.132 }, 00:08:51.132 { 00:08:51.132 "name": "BaseBdev2", 00:08:51.132 "uuid": "19d5c3c0-e8ca-5555-b737-4d3046943909", 00:08:51.132 "is_configured": true, 00:08:51.132 "data_offset": 2048, 00:08:51.132 "data_size": 63488 
00:08:51.132 }, 00:08:51.132 { 00:08:51.132 "name": "BaseBdev3", 00:08:51.132 "uuid": "0bb290d0-fd2d-5484-aa2f-d4c4f21cecc1", 00:08:51.132 "is_configured": true, 00:08:51.132 "data_offset": 2048, 00:08:51.132 "data_size": 63488 00:08:51.132 } 00:08:51.132 ] 00:08:51.132 }' 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.132 12:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.715 12:52:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:51.715 12:52:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:51.716 [2024-11-26 12:52:09.247225] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:52.667 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:52.667 12:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.667 12:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.667 12:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.667 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:52.667 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:52.667 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:52.667 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:52.667 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:52.667 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
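The `verify_raid_bdev_state` helper expanded above fetches the raid bdev's JSON via `bdev_raid_get_bdevs all` with a `jq` filter, then compares fields such as `state`, `raid_level`, and the `num_base_bdevs_*` counters against the expected values. A standalone sketch of that comparison, run against an inlined, flattened copy of the JSON printed above — `jq` is replaced by a small `sed` helper (`get_field`, a hypothetical function not in the test suite) so it needs no SPDK target:

```shell
# Sketch: the essence of verify_raid_bdev_state, against sample JSON
# matching the raid_bdev_info dump above (field values copied from it).
raid_bdev_info='{"name":"raid_bdev1","strip_size_kb":64,"state":"online","raid_level":"concat","num_base_bdevs":3,"num_base_bdevs_discovered":3,"num_base_bdevs_operational":3}'

# Extract a scalar field's value, quoted or not (crude jq stand-in).
get_field() {
  echo "$raid_bdev_info" | sed -n "s/.*\"$1\":\"\{0,1\}\([^\",}]*\)\"\{0,1\}.*/\1/p"
}

state=$(get_field state)
level=$(get_field raid_level)
discovered=$(get_field num_base_bdevs_discovered)
[ "$state" = online ] && [ "$level" = concat ] && [ "$discovered" = 3 ] && result=ok
```

The real helper keeps the whole `jq -r '.[] | select(.name == "raid_bdev1")'` output in `raid_bdev_info` and runs one comparison per field, failing the test on the first mismatch.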
00:08:52.667 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.667 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.667 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.667 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.667 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.667 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.667 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.667 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.667 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:52.667 12:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.667 12:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.667 12:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.667 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.667 "name": "raid_bdev1", 00:08:52.667 "uuid": "8a17d1c6-3340-44a2-8c7f-329b2ed22765", 00:08:52.667 "strip_size_kb": 64, 00:08:52.667 "state": "online", 00:08:52.667 "raid_level": "concat", 00:08:52.667 "superblock": true, 00:08:52.667 "num_base_bdevs": 3, 00:08:52.667 "num_base_bdevs_discovered": 3, 00:08:52.667 "num_base_bdevs_operational": 3, 00:08:52.667 "base_bdevs_list": [ 00:08:52.667 { 00:08:52.667 "name": "BaseBdev1", 00:08:52.667 "uuid": "b3773d40-0752-50ca-9497-e248ddb0a5d9", 00:08:52.667 "is_configured": true, 00:08:52.667 "data_offset": 2048, 00:08:52.667 "data_size": 63488 
00:08:52.667 }, 00:08:52.667 { 00:08:52.667 "name": "BaseBdev2", 00:08:52.667 "uuid": "19d5c3c0-e8ca-5555-b737-4d3046943909", 00:08:52.667 "is_configured": true, 00:08:52.667 "data_offset": 2048, 00:08:52.667 "data_size": 63488 00:08:52.667 }, 00:08:52.667 { 00:08:52.667 "name": "BaseBdev3", 00:08:52.667 "uuid": "0bb290d0-fd2d-5484-aa2f-d4c4f21cecc1", 00:08:52.667 "is_configured": true, 00:08:52.667 "data_offset": 2048, 00:08:52.667 "data_size": 63488 00:08:52.667 } 00:08:52.667 ] 00:08:52.667 }' 00:08:52.667 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.667 12:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.237 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:53.237 12:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.237 12:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.237 [2024-11-26 12:52:10.627709] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:53.237 [2024-11-26 12:52:10.627851] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:53.237 [2024-11-26 12:52:10.630366] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:53.237 [2024-11-26 12:52:10.630429] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.237 [2024-11-26 12:52:10.630470] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:53.237 [2024-11-26 12:52:10.630485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:53.237 { 00:08:53.237 "results": [ 00:08:53.237 { 00:08:53.237 "job": "raid_bdev1", 00:08:53.237 "core_mask": "0x1", 00:08:53.237 "workload": "randrw", 00:08:53.237 "percentage": 50, 
00:08:53.237 "status": "finished", 00:08:53.237 "queue_depth": 1, 00:08:53.237 "io_size": 131072, 00:08:53.237 "runtime": 1.381082, 00:08:53.237 "iops": 15080.205230391823, 00:08:53.237 "mibps": 1885.0256537989778, 00:08:53.237 "io_failed": 1, 00:08:53.237 "io_timeout": 0, 00:08:53.237 "avg_latency_us": 93.11691298998744, 00:08:53.237 "min_latency_us": 24.146724890829695, 00:08:53.237 "max_latency_us": 1366.5257641921398 00:08:53.237 } 00:08:53.237 ], 00:08:53.237 "core_count": 1 00:08:53.237 } 00:08:53.237 12:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.237 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78412 00:08:53.237 12:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 78412 ']' 00:08:53.237 12:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 78412 00:08:53.237 12:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:53.237 12:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:53.237 12:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78412 00:08:53.237 12:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:53.237 12:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:53.237 12:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78412' 00:08:53.237 killing process with pid 78412 00:08:53.237 12:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 78412 00:08:53.237 [2024-11-26 12:52:10.682607] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:53.237 12:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 78412 00:08:53.237 [2024-11-26 
12:52:10.728061] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:53.497 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:53.497 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.DTGPsNbnuV 00:08:53.497 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:53.497 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:53.497 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:53.497 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:53.497 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:53.497 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:53.497 00:08:53.497 real 0m3.537s 00:08:53.497 user 0m4.334s 00:08:53.497 sys 0m0.667s 00:08:53.497 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.497 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.497 ************************************ 00:08:53.497 END TEST raid_read_error_test 00:08:53.497 ************************************ 00:08:53.497 12:52:11 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:08:53.497 12:52:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:53.497 12:52:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.497 12:52:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:53.757 ************************************ 00:08:53.757 START TEST raid_write_error_test 00:08:53.757 ************************************ 00:08:53.757 12:52:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:08:53.757 12:52:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:53.758 12:52:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.XaP4FuJYcy 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78547 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78547 00:08:53.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 78547 ']' 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:53.758 12:52:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.758 [2024-11-26 12:52:11.295507] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:53.758 [2024-11-26 12:52:11.295649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78547 ] 00:08:54.018 [2024-11-26 12:52:11.459825] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.018 [2024-11-26 12:52:11.531501] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.018 [2024-11-26 12:52:11.608168] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.018 [2024-11-26 12:52:11.608232] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.587 BaseBdev1_malloc 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.587 true 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.587 [2024-11-26 12:52:12.166497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:54.587 [2024-11-26 12:52:12.166631] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.587 [2024-11-26 12:52:12.166657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:54.587 [2024-11-26 12:52:12.166667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.587 [2024-11-26 12:52:12.169226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.587 [2024-11-26 12:52:12.169260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:54.587 BaseBdev1 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:54.587 BaseBdev2_malloc 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.587 true 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.587 [2024-11-26 12:52:12.221872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:54.587 [2024-11-26 12:52:12.221928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.587 [2024-11-26 12:52:12.221951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:54.587 [2024-11-26 12:52:12.221961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.587 [2024-11-26 12:52:12.224406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.587 [2024-11-26 12:52:12.224509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:54.587 BaseBdev2 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:54.587 12:52:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.587 BaseBdev3_malloc 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.587 true 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.587 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.847 [2024-11-26 12:52:12.268568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:54.847 [2024-11-26 12:52:12.268620] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.847 [2024-11-26 12:52:12.268642] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:54.847 [2024-11-26 12:52:12.268652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.847 [2024-11-26 12:52:12.271257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.847 [2024-11-26 12:52:12.271292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:54.847 BaseBdev3 00:08:54.847 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.847 12:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:54.847 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.847 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.847 [2024-11-26 12:52:12.280628] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:54.847 [2024-11-26 12:52:12.282888] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:54.847 [2024-11-26 12:52:12.282972] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:54.847 [2024-11-26 12:52:12.283167] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:54.847 [2024-11-26 12:52:12.283196] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:54.847 [2024-11-26 12:52:12.283454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:54.847 [2024-11-26 12:52:12.283593] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:54.847 [2024-11-26 12:52:12.283603] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:54.847 [2024-11-26 12:52:12.283735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.847 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.848 12:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:54.848 12:52:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:54.848 12:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.848 12:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.848 12:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.848 12:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.848 12:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.848 12:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.848 12:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.848 12:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.848 12:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:54.848 12:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.848 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.848 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.848 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.848 12:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.848 "name": "raid_bdev1", 00:08:54.848 "uuid": "a441d23b-506b-4a3d-b20f-087572ed18ff", 00:08:54.848 "strip_size_kb": 64, 00:08:54.848 "state": "online", 00:08:54.848 "raid_level": "concat", 00:08:54.848 "superblock": true, 00:08:54.848 "num_base_bdevs": 3, 00:08:54.848 "num_base_bdevs_discovered": 3, 00:08:54.848 "num_base_bdevs_operational": 3, 00:08:54.848 "base_bdevs_list": [ 00:08:54.848 { 00:08:54.848 
"name": "BaseBdev1", 00:08:54.848 "uuid": "409f3c67-882a-55f7-a327-0644cac7449d", 00:08:54.848 "is_configured": true, 00:08:54.848 "data_offset": 2048, 00:08:54.848 "data_size": 63488 00:08:54.848 }, 00:08:54.848 { 00:08:54.848 "name": "BaseBdev2", 00:08:54.848 "uuid": "45813916-6fba-5a11-8d6a-f055d4fd06fd", 00:08:54.848 "is_configured": true, 00:08:54.848 "data_offset": 2048, 00:08:54.848 "data_size": 63488 00:08:54.848 }, 00:08:54.848 { 00:08:54.848 "name": "BaseBdev3", 00:08:54.848 "uuid": "5af91335-10cf-56e9-bed0-5b9d6cf09a5a", 00:08:54.848 "is_configured": true, 00:08:54.848 "data_offset": 2048, 00:08:54.848 "data_size": 63488 00:08:54.848 } 00:08:54.848 ] 00:08:54.848 }' 00:08:54.848 12:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.848 12:52:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.107 12:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:55.107 12:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:55.367 [2024-11-26 12:52:12.824163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:56.305 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:56.305 12:52:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.305 12:52:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.305 12:52:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.305 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:56.306 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:56.306 12:52:13 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:56.306 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:56.306 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:56.306 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:56.306 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.306 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.306 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.306 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.306 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.306 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.306 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.306 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.306 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:56.306 12:52:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.306 12:52:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.306 12:52:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.306 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.306 "name": "raid_bdev1", 00:08:56.306 "uuid": "a441d23b-506b-4a3d-b20f-087572ed18ff", 00:08:56.306 "strip_size_kb": 64, 00:08:56.306 "state": "online", 
00:08:56.306 "raid_level": "concat", 00:08:56.306 "superblock": true, 00:08:56.306 "num_base_bdevs": 3, 00:08:56.306 "num_base_bdevs_discovered": 3, 00:08:56.306 "num_base_bdevs_operational": 3, 00:08:56.306 "base_bdevs_list": [ 00:08:56.306 { 00:08:56.306 "name": "BaseBdev1", 00:08:56.306 "uuid": "409f3c67-882a-55f7-a327-0644cac7449d", 00:08:56.306 "is_configured": true, 00:08:56.306 "data_offset": 2048, 00:08:56.306 "data_size": 63488 00:08:56.306 }, 00:08:56.306 { 00:08:56.306 "name": "BaseBdev2", 00:08:56.306 "uuid": "45813916-6fba-5a11-8d6a-f055d4fd06fd", 00:08:56.306 "is_configured": true, 00:08:56.306 "data_offset": 2048, 00:08:56.306 "data_size": 63488 00:08:56.306 }, 00:08:56.306 { 00:08:56.306 "name": "BaseBdev3", 00:08:56.306 "uuid": "5af91335-10cf-56e9-bed0-5b9d6cf09a5a", 00:08:56.306 "is_configured": true, 00:08:56.306 "data_offset": 2048, 00:08:56.306 "data_size": 63488 00:08:56.306 } 00:08:56.306 ] 00:08:56.306 }' 00:08:56.306 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.306 12:52:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.565 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:56.565 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.565 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.565 [2024-11-26 12:52:14.212619] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:56.565 [2024-11-26 12:52:14.212770] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:56.565 [2024-11-26 12:52:14.215300] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:56.565 [2024-11-26 12:52:14.215418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:56.565 [2024-11-26 12:52:14.215491] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:56.565 [2024-11-26 12:52:14.215536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:56.565 { 00:08:56.565 "results": [ 00:08:56.565 { 00:08:56.565 "job": "raid_bdev1", 00:08:56.565 "core_mask": "0x1", 00:08:56.565 "workload": "randrw", 00:08:56.565 "percentage": 50, 00:08:56.565 "status": "finished", 00:08:56.565 "queue_depth": 1, 00:08:56.565 "io_size": 131072, 00:08:56.565 "runtime": 1.389148, 00:08:56.565 "iops": 14798.999098728142, 00:08:56.565 "mibps": 1849.8748873410177, 00:08:56.565 "io_failed": 1, 00:08:56.565 "io_timeout": 0, 00:08:56.565 "avg_latency_us": 94.85303649460462, 00:08:56.565 "min_latency_us": 24.482096069868994, 00:08:56.565 "max_latency_us": 1330.7528384279476 00:08:56.565 } 00:08:56.565 ], 00:08:56.565 "core_count": 1 00:08:56.565 } 00:08:56.565 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.565 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78547 00:08:56.565 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 78547 ']' 00:08:56.565 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 78547 00:08:56.565 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:56.565 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:56.565 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78547 00:08:56.825 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:56.825 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:56.825 12:52:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 78547' 00:08:56.825 killing process with pid 78547 00:08:56.825 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 78547 00:08:56.825 [2024-11-26 12:52:14.253454] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:56.825 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 78547 00:08:56.825 [2024-11-26 12:52:14.299400] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:57.085 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.XaP4FuJYcy 00:08:57.085 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:57.085 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:57.085 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:57.085 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:57.085 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:57.085 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:57.085 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:57.085 00:08:57.085 real 0m3.497s 00:08:57.085 user 0m4.281s 00:08:57.085 sys 0m0.641s 00:08:57.085 ************************************ 00:08:57.085 END TEST raid_write_error_test 00:08:57.085 ************************************ 00:08:57.085 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.085 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.085 12:52:14 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:57.085 12:52:14 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:08:57.085 12:52:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:57.085 12:52:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.085 12:52:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:57.085 ************************************ 00:08:57.085 START TEST raid_state_function_test 00:08:57.085 ************************************ 00:08:57.085 12:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:08:57.085 12:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:57.085 12:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:57.085 12:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:57.085 12:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:57.085 12:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:57.085 12:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:57.085 12:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:57.085 12:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:57.085 12:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:57.345 12:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:57.345 12:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:57.345 12:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:57.345 12:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:57.345 12:52:14 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:57.345 12:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:57.345 12:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:57.345 12:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:57.345 12:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:57.345 12:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:57.345 12:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:57.345 12:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:57.345 12:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:57.345 12:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:57.345 12:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:57.345 12:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:57.345 12:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78675 00:08:57.345 12:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:57.345 12:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78675' 00:08:57.345 Process raid pid: 78675 00:08:57.345 12:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78675 00:08:57.345 12:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 78675 ']' 00:08:57.345 12:52:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.345 12:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:57.345 12:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.345 12:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:57.345 12:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.345 [2024-11-26 12:52:14.852545] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:57.345 [2024-11-26 12:52:14.852691] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.345 [2024-11-26 12:52:15.009910] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.605 [2024-11-26 12:52:15.083325] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.606 [2024-11-26 12:52:15.159400] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.606 [2024-11-26 12:52:15.159447] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.176 12:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:58.176 12:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:58.176 12:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:58.176 12:52:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.176 12:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.176 [2024-11-26 12:52:15.674388] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:58.176 [2024-11-26 12:52:15.674450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:58.176 [2024-11-26 12:52:15.674470] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:58.176 [2024-11-26 12:52:15.674481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:58.176 [2024-11-26 12:52:15.674487] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:58.176 [2024-11-26 12:52:15.674501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:58.176 12:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.176 12:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:58.176 12:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.176 12:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.176 12:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:58.176 12:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:58.176 12:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.176 12:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.176 12:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.176 
12:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.176 12:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.176 12:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.176 12:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.176 12:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.176 12:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.176 12:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.176 12:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.176 "name": "Existed_Raid", 00:08:58.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.176 "strip_size_kb": 0, 00:08:58.176 "state": "configuring", 00:08:58.176 "raid_level": "raid1", 00:08:58.176 "superblock": false, 00:08:58.176 "num_base_bdevs": 3, 00:08:58.176 "num_base_bdevs_discovered": 0, 00:08:58.176 "num_base_bdevs_operational": 3, 00:08:58.176 "base_bdevs_list": [ 00:08:58.176 { 00:08:58.176 "name": "BaseBdev1", 00:08:58.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.176 "is_configured": false, 00:08:58.176 "data_offset": 0, 00:08:58.176 "data_size": 0 00:08:58.176 }, 00:08:58.176 { 00:08:58.176 "name": "BaseBdev2", 00:08:58.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.176 "is_configured": false, 00:08:58.176 "data_offset": 0, 00:08:58.176 "data_size": 0 00:08:58.176 }, 00:08:58.176 { 00:08:58.176 "name": "BaseBdev3", 00:08:58.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.176 "is_configured": false, 00:08:58.176 "data_offset": 0, 00:08:58.176 "data_size": 0 00:08:58.176 } 00:08:58.176 ] 00:08:58.176 }' 00:08:58.176 12:52:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.176 12:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.436 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:58.436 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.436 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.436 [2024-11-26 12:52:16.101598] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:58.436 [2024-11-26 12:52:16.101737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:58.436 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.436 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:58.436 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.436 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.436 [2024-11-26 12:52:16.113589] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:58.436 [2024-11-26 12:52:16.113685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:58.436 [2024-11-26 12:52:16.113713] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:58.436 [2024-11-26 12:52:16.113735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:58.436 [2024-11-26 12:52:16.113754] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:58.436 [2024-11-26 12:52:16.113775] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.697 [2024-11-26 12:52:16.140876] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:58.697 BaseBdev1 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.697 [ 00:08:58.697 { 00:08:58.697 "name": "BaseBdev1", 00:08:58.697 "aliases": [ 00:08:58.697 "de6dcb9e-0652-4a03-a61f-8637ca703cd8" 00:08:58.697 ], 00:08:58.697 "product_name": "Malloc disk", 00:08:58.697 "block_size": 512, 00:08:58.697 "num_blocks": 65536, 00:08:58.697 "uuid": "de6dcb9e-0652-4a03-a61f-8637ca703cd8", 00:08:58.697 "assigned_rate_limits": { 00:08:58.697 "rw_ios_per_sec": 0, 00:08:58.697 "rw_mbytes_per_sec": 0, 00:08:58.697 "r_mbytes_per_sec": 0, 00:08:58.697 "w_mbytes_per_sec": 0 00:08:58.697 }, 00:08:58.697 "claimed": true, 00:08:58.697 "claim_type": "exclusive_write", 00:08:58.697 "zoned": false, 00:08:58.697 "supported_io_types": { 00:08:58.697 "read": true, 00:08:58.697 "write": true, 00:08:58.697 "unmap": true, 00:08:58.697 "flush": true, 00:08:58.697 "reset": true, 00:08:58.697 "nvme_admin": false, 00:08:58.697 "nvme_io": false, 00:08:58.697 "nvme_io_md": false, 00:08:58.697 "write_zeroes": true, 00:08:58.697 "zcopy": true, 00:08:58.697 "get_zone_info": false, 00:08:58.697 "zone_management": false, 00:08:58.697 "zone_append": false, 00:08:58.697 "compare": false, 00:08:58.697 "compare_and_write": false, 00:08:58.697 "abort": true, 00:08:58.697 "seek_hole": false, 00:08:58.697 "seek_data": false, 00:08:58.697 "copy": true, 00:08:58.697 "nvme_iov_md": false 00:08:58.697 }, 00:08:58.697 "memory_domains": [ 00:08:58.697 { 00:08:58.697 "dma_device_id": "system", 00:08:58.697 "dma_device_type": 1 00:08:58.697 }, 00:08:58.697 { 00:08:58.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.697 "dma_device_type": 2 00:08:58.697 } 00:08:58.697 ], 00:08:58.697 "driver_specific": {} 00:08:58.697 } 00:08:58.697 ] 00:08:58.697 12:52:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:58.697 "name": "Existed_Raid", 00:08:58.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.697 "strip_size_kb": 0, 00:08:58.697 "state": "configuring", 00:08:58.697 "raid_level": "raid1", 00:08:58.697 "superblock": false, 00:08:58.697 "num_base_bdevs": 3, 00:08:58.697 "num_base_bdevs_discovered": 1, 00:08:58.697 "num_base_bdevs_operational": 3, 00:08:58.697 "base_bdevs_list": [ 00:08:58.697 { 00:08:58.697 "name": "BaseBdev1", 00:08:58.697 "uuid": "de6dcb9e-0652-4a03-a61f-8637ca703cd8", 00:08:58.697 "is_configured": true, 00:08:58.697 "data_offset": 0, 00:08:58.697 "data_size": 65536 00:08:58.697 }, 00:08:58.697 { 00:08:58.697 "name": "BaseBdev2", 00:08:58.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.697 "is_configured": false, 00:08:58.697 "data_offset": 0, 00:08:58.697 "data_size": 0 00:08:58.697 }, 00:08:58.697 { 00:08:58.697 "name": "BaseBdev3", 00:08:58.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.697 "is_configured": false, 00:08:58.697 "data_offset": 0, 00:08:58.697 "data_size": 0 00:08:58.697 } 00:08:58.697 ] 00:08:58.697 }' 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.697 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.957 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:58.957 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.957 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.957 [2024-11-26 12:52:16.628078] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:58.957 [2024-11-26 12:52:16.628147] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:58.957 12:52:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.957 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:58.957 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.957 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.217 [2024-11-26 12:52:16.640087] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:59.217 [2024-11-26 12:52:16.642262] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:59.217 [2024-11-26 12:52:16.642335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:59.217 [2024-11-26 12:52:16.642363] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:59.217 [2024-11-26 12:52:16.642386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:59.217 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.217 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:59.217 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:59.217 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:59.218 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.218 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.218 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:59.218 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:08:59.218 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.218 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.218 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.218 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.218 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.218 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.218 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.218 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.218 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.218 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.218 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.218 "name": "Existed_Raid", 00:08:59.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.218 "strip_size_kb": 0, 00:08:59.218 "state": "configuring", 00:08:59.218 "raid_level": "raid1", 00:08:59.218 "superblock": false, 00:08:59.218 "num_base_bdevs": 3, 00:08:59.218 "num_base_bdevs_discovered": 1, 00:08:59.218 "num_base_bdevs_operational": 3, 00:08:59.218 "base_bdevs_list": [ 00:08:59.218 { 00:08:59.218 "name": "BaseBdev1", 00:08:59.218 "uuid": "de6dcb9e-0652-4a03-a61f-8637ca703cd8", 00:08:59.218 "is_configured": true, 00:08:59.218 "data_offset": 0, 00:08:59.218 "data_size": 65536 00:08:59.218 }, 00:08:59.218 { 00:08:59.218 "name": "BaseBdev2", 00:08:59.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.218 
"is_configured": false, 00:08:59.218 "data_offset": 0, 00:08:59.218 "data_size": 0 00:08:59.218 }, 00:08:59.218 { 00:08:59.218 "name": "BaseBdev3", 00:08:59.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.218 "is_configured": false, 00:08:59.218 "data_offset": 0, 00:08:59.218 "data_size": 0 00:08:59.218 } 00:08:59.218 ] 00:08:59.218 }' 00:08:59.218 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.218 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.478 12:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:59.478 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.478 12:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.478 [2024-11-26 12:52:17.031557] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:59.478 BaseBdev2 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:59.478 12:52:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.478 [ 00:08:59.478 { 00:08:59.478 "name": "BaseBdev2", 00:08:59.478 "aliases": [ 00:08:59.478 "bedf58b0-0d13-4045-b862-ebae4a035f20" 00:08:59.478 ], 00:08:59.478 "product_name": "Malloc disk", 00:08:59.478 "block_size": 512, 00:08:59.478 "num_blocks": 65536, 00:08:59.478 "uuid": "bedf58b0-0d13-4045-b862-ebae4a035f20", 00:08:59.478 "assigned_rate_limits": { 00:08:59.478 "rw_ios_per_sec": 0, 00:08:59.478 "rw_mbytes_per_sec": 0, 00:08:59.478 "r_mbytes_per_sec": 0, 00:08:59.478 "w_mbytes_per_sec": 0 00:08:59.478 }, 00:08:59.478 "claimed": true, 00:08:59.478 "claim_type": "exclusive_write", 00:08:59.478 "zoned": false, 00:08:59.478 "supported_io_types": { 00:08:59.478 "read": true, 00:08:59.478 "write": true, 00:08:59.478 "unmap": true, 00:08:59.478 "flush": true, 00:08:59.478 "reset": true, 00:08:59.478 "nvme_admin": false, 00:08:59.478 "nvme_io": false, 00:08:59.478 "nvme_io_md": false, 00:08:59.478 "write_zeroes": true, 00:08:59.478 "zcopy": true, 00:08:59.478 "get_zone_info": false, 00:08:59.478 "zone_management": false, 00:08:59.478 "zone_append": false, 00:08:59.478 "compare": false, 00:08:59.478 "compare_and_write": false, 00:08:59.478 "abort": true, 00:08:59.478 "seek_hole": false, 00:08:59.478 "seek_data": false, 00:08:59.478 "copy": true, 00:08:59.478 "nvme_iov_md": false 00:08:59.478 }, 00:08:59.478 
"memory_domains": [ 00:08:59.478 { 00:08:59.478 "dma_device_id": "system", 00:08:59.478 "dma_device_type": 1 00:08:59.478 }, 00:08:59.478 { 00:08:59.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.478 "dma_device_type": 2 00:08:59.478 } 00:08:59.478 ], 00:08:59.478 "driver_specific": {} 00:08:59.478 } 00:08:59.478 ] 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.478 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.478 "name": "Existed_Raid", 00:08:59.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.478 "strip_size_kb": 0, 00:08:59.478 "state": "configuring", 00:08:59.478 "raid_level": "raid1", 00:08:59.478 "superblock": false, 00:08:59.478 "num_base_bdevs": 3, 00:08:59.478 "num_base_bdevs_discovered": 2, 00:08:59.478 "num_base_bdevs_operational": 3, 00:08:59.478 "base_bdevs_list": [ 00:08:59.478 { 00:08:59.478 "name": "BaseBdev1", 00:08:59.478 "uuid": "de6dcb9e-0652-4a03-a61f-8637ca703cd8", 00:08:59.478 "is_configured": true, 00:08:59.478 "data_offset": 0, 00:08:59.478 "data_size": 65536 00:08:59.478 }, 00:08:59.478 { 00:08:59.478 "name": "BaseBdev2", 00:08:59.478 "uuid": "bedf58b0-0d13-4045-b862-ebae4a035f20", 00:08:59.478 "is_configured": true, 00:08:59.478 "data_offset": 0, 00:08:59.478 "data_size": 65536 00:08:59.479 }, 00:08:59.479 { 00:08:59.479 "name": "BaseBdev3", 00:08:59.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.479 "is_configured": false, 00:08:59.479 "data_offset": 0, 00:08:59.479 "data_size": 0 00:08:59.479 } 00:08:59.479 ] 00:08:59.479 }' 00:08:59.479 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.479 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.049 [2024-11-26 12:52:17.507735] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:00.049 [2024-11-26 12:52:17.507786] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:00.049 [2024-11-26 12:52:17.507806] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:00.049 [2024-11-26 12:52:17.508103] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:00.049 [2024-11-26 12:52:17.508312] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:00.049 [2024-11-26 12:52:17.508335] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:00.049 [2024-11-26 12:52:17.508564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.049 BaseBdev3 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.049 [ 00:09:00.049 { 00:09:00.049 "name": "BaseBdev3", 00:09:00.049 "aliases": [ 00:09:00.049 "a7eeb164-f33f-42b2-8830-9cda7b8fcf1a" 00:09:00.049 ], 00:09:00.049 "product_name": "Malloc disk", 00:09:00.049 "block_size": 512, 00:09:00.049 "num_blocks": 65536, 00:09:00.049 "uuid": "a7eeb164-f33f-42b2-8830-9cda7b8fcf1a", 00:09:00.049 "assigned_rate_limits": { 00:09:00.049 "rw_ios_per_sec": 0, 00:09:00.049 "rw_mbytes_per_sec": 0, 00:09:00.049 "r_mbytes_per_sec": 0, 00:09:00.049 "w_mbytes_per_sec": 0 00:09:00.049 }, 00:09:00.049 "claimed": true, 00:09:00.049 "claim_type": "exclusive_write", 00:09:00.049 "zoned": false, 00:09:00.049 "supported_io_types": { 00:09:00.049 "read": true, 00:09:00.049 "write": true, 00:09:00.049 "unmap": true, 00:09:00.049 "flush": true, 00:09:00.049 "reset": true, 00:09:00.049 "nvme_admin": false, 00:09:00.049 "nvme_io": false, 00:09:00.049 "nvme_io_md": false, 00:09:00.049 "write_zeroes": true, 00:09:00.049 "zcopy": true, 00:09:00.049 "get_zone_info": false, 00:09:00.049 "zone_management": false, 00:09:00.049 "zone_append": false, 00:09:00.049 "compare": false, 00:09:00.049 "compare_and_write": false, 00:09:00.049 "abort": true, 00:09:00.049 "seek_hole": false, 00:09:00.049 "seek_data": false, 00:09:00.049 
"copy": true, 00:09:00.049 "nvme_iov_md": false 00:09:00.049 }, 00:09:00.049 "memory_domains": [ 00:09:00.049 { 00:09:00.049 "dma_device_id": "system", 00:09:00.049 "dma_device_type": 1 00:09:00.049 }, 00:09:00.049 { 00:09:00.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.049 "dma_device_type": 2 00:09:00.049 } 00:09:00.049 ], 00:09:00.049 "driver_specific": {} 00:09:00.049 } 00:09:00.049 ] 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.049 12:52:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.049 "name": "Existed_Raid", 00:09:00.049 "uuid": "2fdff77c-852f-44dd-83ee-eb5080e3b032", 00:09:00.049 "strip_size_kb": 0, 00:09:00.049 "state": "online", 00:09:00.049 "raid_level": "raid1", 00:09:00.049 "superblock": false, 00:09:00.049 "num_base_bdevs": 3, 00:09:00.049 "num_base_bdevs_discovered": 3, 00:09:00.049 "num_base_bdevs_operational": 3, 00:09:00.049 "base_bdevs_list": [ 00:09:00.049 { 00:09:00.049 "name": "BaseBdev1", 00:09:00.049 "uuid": "de6dcb9e-0652-4a03-a61f-8637ca703cd8", 00:09:00.049 "is_configured": true, 00:09:00.049 "data_offset": 0, 00:09:00.049 "data_size": 65536 00:09:00.049 }, 00:09:00.049 { 00:09:00.049 "name": "BaseBdev2", 00:09:00.049 "uuid": "bedf58b0-0d13-4045-b862-ebae4a035f20", 00:09:00.049 "is_configured": true, 00:09:00.049 "data_offset": 0, 00:09:00.049 "data_size": 65536 00:09:00.049 }, 00:09:00.049 { 00:09:00.049 "name": "BaseBdev3", 00:09:00.049 "uuid": "a7eeb164-f33f-42b2-8830-9cda7b8fcf1a", 00:09:00.049 "is_configured": true, 00:09:00.049 "data_offset": 0, 00:09:00.049 "data_size": 65536 00:09:00.049 } 00:09:00.049 ] 00:09:00.049 }' 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.049 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.309 12:52:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:00.309 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:00.309 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:00.309 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:00.309 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:00.309 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:00.309 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:00.309 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:00.309 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.309 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.309 [2024-11-26 12:52:17.963308] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:00.309 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.568 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:00.568 "name": "Existed_Raid", 00:09:00.568 "aliases": [ 00:09:00.568 "2fdff77c-852f-44dd-83ee-eb5080e3b032" 00:09:00.568 ], 00:09:00.569 "product_name": "Raid Volume", 00:09:00.569 "block_size": 512, 00:09:00.569 "num_blocks": 65536, 00:09:00.569 "uuid": "2fdff77c-852f-44dd-83ee-eb5080e3b032", 00:09:00.569 "assigned_rate_limits": { 00:09:00.569 "rw_ios_per_sec": 0, 00:09:00.569 "rw_mbytes_per_sec": 0, 00:09:00.569 "r_mbytes_per_sec": 0, 00:09:00.569 "w_mbytes_per_sec": 0 00:09:00.569 }, 00:09:00.569 "claimed": false, 00:09:00.569 "zoned": false, 
00:09:00.569 "supported_io_types": { 00:09:00.569 "read": true, 00:09:00.569 "write": true, 00:09:00.569 "unmap": false, 00:09:00.569 "flush": false, 00:09:00.569 "reset": true, 00:09:00.569 "nvme_admin": false, 00:09:00.569 "nvme_io": false, 00:09:00.569 "nvme_io_md": false, 00:09:00.569 "write_zeroes": true, 00:09:00.569 "zcopy": false, 00:09:00.569 "get_zone_info": false, 00:09:00.569 "zone_management": false, 00:09:00.569 "zone_append": false, 00:09:00.569 "compare": false, 00:09:00.569 "compare_and_write": false, 00:09:00.569 "abort": false, 00:09:00.569 "seek_hole": false, 00:09:00.569 "seek_data": false, 00:09:00.569 "copy": false, 00:09:00.569 "nvme_iov_md": false 00:09:00.569 }, 00:09:00.569 "memory_domains": [ 00:09:00.569 { 00:09:00.569 "dma_device_id": "system", 00:09:00.569 "dma_device_type": 1 00:09:00.569 }, 00:09:00.569 { 00:09:00.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.569 "dma_device_type": 2 00:09:00.569 }, 00:09:00.569 { 00:09:00.569 "dma_device_id": "system", 00:09:00.569 "dma_device_type": 1 00:09:00.569 }, 00:09:00.569 { 00:09:00.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.569 "dma_device_type": 2 00:09:00.569 }, 00:09:00.569 { 00:09:00.569 "dma_device_id": "system", 00:09:00.569 "dma_device_type": 1 00:09:00.569 }, 00:09:00.569 { 00:09:00.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.569 "dma_device_type": 2 00:09:00.569 } 00:09:00.569 ], 00:09:00.569 "driver_specific": { 00:09:00.569 "raid": { 00:09:00.569 "uuid": "2fdff77c-852f-44dd-83ee-eb5080e3b032", 00:09:00.569 "strip_size_kb": 0, 00:09:00.569 "state": "online", 00:09:00.569 "raid_level": "raid1", 00:09:00.569 "superblock": false, 00:09:00.569 "num_base_bdevs": 3, 00:09:00.569 "num_base_bdevs_discovered": 3, 00:09:00.569 "num_base_bdevs_operational": 3, 00:09:00.569 "base_bdevs_list": [ 00:09:00.569 { 00:09:00.569 "name": "BaseBdev1", 00:09:00.569 "uuid": "de6dcb9e-0652-4a03-a61f-8637ca703cd8", 00:09:00.569 "is_configured": true, 00:09:00.569 
"data_offset": 0, 00:09:00.569 "data_size": 65536 00:09:00.569 }, 00:09:00.569 { 00:09:00.569 "name": "BaseBdev2", 00:09:00.569 "uuid": "bedf58b0-0d13-4045-b862-ebae4a035f20", 00:09:00.569 "is_configured": true, 00:09:00.569 "data_offset": 0, 00:09:00.569 "data_size": 65536 00:09:00.569 }, 00:09:00.569 { 00:09:00.569 "name": "BaseBdev3", 00:09:00.569 "uuid": "a7eeb164-f33f-42b2-8830-9cda7b8fcf1a", 00:09:00.569 "is_configured": true, 00:09:00.569 "data_offset": 0, 00:09:00.569 "data_size": 65536 00:09:00.569 } 00:09:00.569 ] 00:09:00.569 } 00:09:00.569 } 00:09:00.569 }' 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:00.569 BaseBdev2 00:09:00.569 BaseBdev3' 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.569 [2024-11-26 12:52:18.218700] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.569 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.829 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.829 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.829 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.829 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.829 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.829 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.829 "name": "Existed_Raid", 00:09:00.829 "uuid": "2fdff77c-852f-44dd-83ee-eb5080e3b032", 00:09:00.829 "strip_size_kb": 0, 00:09:00.829 "state": "online", 00:09:00.829 "raid_level": "raid1", 00:09:00.829 "superblock": false, 00:09:00.829 "num_base_bdevs": 3, 00:09:00.829 "num_base_bdevs_discovered": 2, 00:09:00.829 "num_base_bdevs_operational": 2, 00:09:00.829 "base_bdevs_list": [ 00:09:00.829 { 00:09:00.829 "name": null, 00:09:00.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.829 "is_configured": false, 00:09:00.829 "data_offset": 0, 00:09:00.829 "data_size": 65536 00:09:00.829 }, 00:09:00.829 { 00:09:00.829 "name": "BaseBdev2", 00:09:00.829 "uuid": "bedf58b0-0d13-4045-b862-ebae4a035f20", 00:09:00.829 "is_configured": true, 00:09:00.829 "data_offset": 0, 00:09:00.829 "data_size": 65536 00:09:00.829 }, 00:09:00.829 { 00:09:00.829 "name": "BaseBdev3", 00:09:00.829 "uuid": "a7eeb164-f33f-42b2-8830-9cda7b8fcf1a", 00:09:00.829 "is_configured": true, 00:09:00.829 "data_offset": 0, 00:09:00.829 "data_size": 65536 00:09:00.829 } 00:09:00.829 ] 
00:09:00.829 }' 00:09:00.829 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.829 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.089 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:01.089 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:01.089 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.089 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.089 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:01.089 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.089 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.089 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:01.089 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:01.089 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:01.089 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.089 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.089 [2024-11-26 12:52:18.742867] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:01.089 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.089 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:01.089 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:01.349 12:52:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.349 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.349 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.349 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:01.349 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.349 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:01.349 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.350 [2024-11-26 12:52:18.822911] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:01.350 [2024-11-26 12:52:18.823012] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:01.350 [2024-11-26 12:52:18.844137] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.350 [2024-11-26 12:52:18.844263] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:01.350 [2024-11-26 12:52:18.844313] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:01.350 12:52:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.350 BaseBdev2 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:01.350 
12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.350 [ 00:09:01.350 { 00:09:01.350 "name": "BaseBdev2", 00:09:01.350 "aliases": [ 00:09:01.350 "9ea44fe2-00b1-48e7-a323-50e06714cd2c" 00:09:01.350 ], 00:09:01.350 "product_name": "Malloc disk", 00:09:01.350 "block_size": 512, 00:09:01.350 "num_blocks": 65536, 00:09:01.350 "uuid": "9ea44fe2-00b1-48e7-a323-50e06714cd2c", 00:09:01.350 "assigned_rate_limits": { 00:09:01.350 "rw_ios_per_sec": 0, 00:09:01.350 "rw_mbytes_per_sec": 0, 00:09:01.350 "r_mbytes_per_sec": 0, 00:09:01.350 "w_mbytes_per_sec": 0 00:09:01.350 }, 00:09:01.350 "claimed": false, 00:09:01.350 "zoned": false, 00:09:01.350 "supported_io_types": { 00:09:01.350 "read": true, 00:09:01.350 "write": true, 00:09:01.350 "unmap": true, 00:09:01.350 "flush": true, 00:09:01.350 "reset": true, 00:09:01.350 "nvme_admin": false, 00:09:01.350 "nvme_io": false, 00:09:01.350 "nvme_io_md": false, 00:09:01.350 "write_zeroes": true, 
00:09:01.350 "zcopy": true, 00:09:01.350 "get_zone_info": false, 00:09:01.350 "zone_management": false, 00:09:01.350 "zone_append": false, 00:09:01.350 "compare": false, 00:09:01.350 "compare_and_write": false, 00:09:01.350 "abort": true, 00:09:01.350 "seek_hole": false, 00:09:01.350 "seek_data": false, 00:09:01.350 "copy": true, 00:09:01.350 "nvme_iov_md": false 00:09:01.350 }, 00:09:01.350 "memory_domains": [ 00:09:01.350 { 00:09:01.350 "dma_device_id": "system", 00:09:01.350 "dma_device_type": 1 00:09:01.350 }, 00:09:01.350 { 00:09:01.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.350 "dma_device_type": 2 00:09:01.350 } 00:09:01.350 ], 00:09:01.350 "driver_specific": {} 00:09:01.350 } 00:09:01.350 ] 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.350 BaseBdev3 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:01.350 12:52:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.350 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.350 [ 00:09:01.350 { 00:09:01.350 "name": "BaseBdev3", 00:09:01.350 "aliases": [ 00:09:01.350 "064fd39b-25c0-4dbc-8115-a5aa9b79b953" 00:09:01.350 ], 00:09:01.350 "product_name": "Malloc disk", 00:09:01.350 "block_size": 512, 00:09:01.350 "num_blocks": 65536, 00:09:01.350 "uuid": "064fd39b-25c0-4dbc-8115-a5aa9b79b953", 00:09:01.350 "assigned_rate_limits": { 00:09:01.350 "rw_ios_per_sec": 0, 00:09:01.350 "rw_mbytes_per_sec": 0, 00:09:01.350 "r_mbytes_per_sec": 0, 00:09:01.350 "w_mbytes_per_sec": 0 00:09:01.350 }, 00:09:01.350 "claimed": false, 00:09:01.350 "zoned": false, 00:09:01.350 "supported_io_types": { 00:09:01.350 "read": true, 00:09:01.350 "write": true, 00:09:01.350 "unmap": true, 00:09:01.350 "flush": true, 00:09:01.350 "reset": true, 00:09:01.350 "nvme_admin": false, 00:09:01.350 "nvme_io": false, 00:09:01.350 "nvme_io_md": false, 00:09:01.350 "write_zeroes": true, 
00:09:01.350 "zcopy": true, 00:09:01.350 "get_zone_info": false, 00:09:01.350 "zone_management": false, 00:09:01.350 "zone_append": false, 00:09:01.350 "compare": false, 00:09:01.350 "compare_and_write": false, 00:09:01.350 "abort": true, 00:09:01.350 "seek_hole": false, 00:09:01.350 "seek_data": false, 00:09:01.350 "copy": true, 00:09:01.350 "nvme_iov_md": false 00:09:01.350 }, 00:09:01.350 "memory_domains": [ 00:09:01.350 { 00:09:01.350 "dma_device_id": "system", 00:09:01.350 "dma_device_type": 1 00:09:01.350 }, 00:09:01.350 { 00:09:01.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.350 "dma_device_type": 2 00:09:01.350 } 00:09:01.350 ], 00:09:01.350 "driver_specific": {} 00:09:01.350 } 00:09:01.350 ] 00:09:01.350 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.351 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:01.351 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:01.351 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:01.351 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:01.351 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.351 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.351 [2024-11-26 12:52:19.022106] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:01.351 [2024-11-26 12:52:19.022248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:01.351 [2024-11-26 12:52:19.022289] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:01.351 [2024-11-26 12:52:19.024454] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:01.610 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.610 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:01.610 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.610 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.610 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:01.610 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:01.610 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.610 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.610 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.610 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.610 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.610 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.610 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.610 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.610 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.610 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.610 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:01.610 "name": "Existed_Raid", 00:09:01.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.610 "strip_size_kb": 0, 00:09:01.610 "state": "configuring", 00:09:01.610 "raid_level": "raid1", 00:09:01.610 "superblock": false, 00:09:01.610 "num_base_bdevs": 3, 00:09:01.610 "num_base_bdevs_discovered": 2, 00:09:01.610 "num_base_bdevs_operational": 3, 00:09:01.610 "base_bdevs_list": [ 00:09:01.610 { 00:09:01.610 "name": "BaseBdev1", 00:09:01.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.610 "is_configured": false, 00:09:01.610 "data_offset": 0, 00:09:01.610 "data_size": 0 00:09:01.610 }, 00:09:01.610 { 00:09:01.610 "name": "BaseBdev2", 00:09:01.610 "uuid": "9ea44fe2-00b1-48e7-a323-50e06714cd2c", 00:09:01.610 "is_configured": true, 00:09:01.610 "data_offset": 0, 00:09:01.610 "data_size": 65536 00:09:01.610 }, 00:09:01.610 { 00:09:01.610 "name": "BaseBdev3", 00:09:01.610 "uuid": "064fd39b-25c0-4dbc-8115-a5aa9b79b953", 00:09:01.610 "is_configured": true, 00:09:01.610 "data_offset": 0, 00:09:01.610 "data_size": 65536 00:09:01.610 } 00:09:01.610 ] 00:09:01.610 }' 00:09:01.610 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.610 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.870 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:01.870 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.870 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.870 [2024-11-26 12:52:19.421427] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:01.870 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.870 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:01.870 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.870 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.870 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:01.870 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:01.870 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.870 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.870 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.870 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.870 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.870 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.870 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.870 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.870 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.870 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.870 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.870 "name": "Existed_Raid", 00:09:01.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.870 "strip_size_kb": 0, 00:09:01.870 "state": "configuring", 00:09:01.870 "raid_level": "raid1", 00:09:01.870 "superblock": false, 00:09:01.870 "num_base_bdevs": 3, 
00:09:01.870 "num_base_bdevs_discovered": 1, 00:09:01.870 "num_base_bdevs_operational": 3, 00:09:01.870 "base_bdevs_list": [ 00:09:01.870 { 00:09:01.870 "name": "BaseBdev1", 00:09:01.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.870 "is_configured": false, 00:09:01.870 "data_offset": 0, 00:09:01.870 "data_size": 0 00:09:01.870 }, 00:09:01.870 { 00:09:01.870 "name": null, 00:09:01.870 "uuid": "9ea44fe2-00b1-48e7-a323-50e06714cd2c", 00:09:01.870 "is_configured": false, 00:09:01.870 "data_offset": 0, 00:09:01.870 "data_size": 65536 00:09:01.870 }, 00:09:01.870 { 00:09:01.870 "name": "BaseBdev3", 00:09:01.870 "uuid": "064fd39b-25c0-4dbc-8115-a5aa9b79b953", 00:09:01.870 "is_configured": true, 00:09:01.870 "data_offset": 0, 00:09:01.870 "data_size": 65536 00:09:01.870 } 00:09:01.870 ] 00:09:01.870 }' 00:09:01.870 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.870 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.439 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.439 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:02.439 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.439 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.439 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.439 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:02.439 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:02.439 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.439 12:52:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.439 [2024-11-26 12:52:19.921272] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:02.439 BaseBdev1 00:09:02.439 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.439 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:02.439 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:02.439 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:02.439 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:02.439 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:02.439 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:02.439 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:02.439 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.439 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.439 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.439 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:02.439 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.439 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.439 [ 00:09:02.439 { 00:09:02.439 "name": "BaseBdev1", 00:09:02.439 "aliases": [ 00:09:02.439 "50fe4c5b-4374-4e94-8eaf-012f126876e0" 00:09:02.439 ], 00:09:02.439 "product_name": "Malloc disk", 
00:09:02.439 "block_size": 512, 00:09:02.439 "num_blocks": 65536, 00:09:02.439 "uuid": "50fe4c5b-4374-4e94-8eaf-012f126876e0", 00:09:02.439 "assigned_rate_limits": { 00:09:02.439 "rw_ios_per_sec": 0, 00:09:02.439 "rw_mbytes_per_sec": 0, 00:09:02.439 "r_mbytes_per_sec": 0, 00:09:02.439 "w_mbytes_per_sec": 0 00:09:02.439 }, 00:09:02.439 "claimed": true, 00:09:02.439 "claim_type": "exclusive_write", 00:09:02.439 "zoned": false, 00:09:02.440 "supported_io_types": { 00:09:02.440 "read": true, 00:09:02.440 "write": true, 00:09:02.440 "unmap": true, 00:09:02.440 "flush": true, 00:09:02.440 "reset": true, 00:09:02.440 "nvme_admin": false, 00:09:02.440 "nvme_io": false, 00:09:02.440 "nvme_io_md": false, 00:09:02.440 "write_zeroes": true, 00:09:02.440 "zcopy": true, 00:09:02.440 "get_zone_info": false, 00:09:02.440 "zone_management": false, 00:09:02.440 "zone_append": false, 00:09:02.440 "compare": false, 00:09:02.440 "compare_and_write": false, 00:09:02.440 "abort": true, 00:09:02.440 "seek_hole": false, 00:09:02.440 "seek_data": false, 00:09:02.440 "copy": true, 00:09:02.440 "nvme_iov_md": false 00:09:02.440 }, 00:09:02.440 "memory_domains": [ 00:09:02.440 { 00:09:02.440 "dma_device_id": "system", 00:09:02.440 "dma_device_type": 1 00:09:02.440 }, 00:09:02.440 { 00:09:02.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.440 "dma_device_type": 2 00:09:02.440 } 00:09:02.440 ], 00:09:02.440 "driver_specific": {} 00:09:02.440 } 00:09:02.440 ] 00:09:02.440 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.440 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:02.440 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:02.440 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.440 12:52:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.440 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:02.440 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:02.440 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.440 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.440 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.440 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.440 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.440 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.440 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.440 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.440 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.440 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.440 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.440 "name": "Existed_Raid", 00:09:02.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.440 "strip_size_kb": 0, 00:09:02.440 "state": "configuring", 00:09:02.440 "raid_level": "raid1", 00:09:02.440 "superblock": false, 00:09:02.440 "num_base_bdevs": 3, 00:09:02.440 "num_base_bdevs_discovered": 2, 00:09:02.440 "num_base_bdevs_operational": 3, 00:09:02.440 "base_bdevs_list": [ 00:09:02.440 { 00:09:02.440 "name": "BaseBdev1", 00:09:02.440 "uuid": 
"50fe4c5b-4374-4e94-8eaf-012f126876e0", 00:09:02.440 "is_configured": true, 00:09:02.440 "data_offset": 0, 00:09:02.440 "data_size": 65536 00:09:02.440 }, 00:09:02.440 { 00:09:02.440 "name": null, 00:09:02.440 "uuid": "9ea44fe2-00b1-48e7-a323-50e06714cd2c", 00:09:02.440 "is_configured": false, 00:09:02.440 "data_offset": 0, 00:09:02.440 "data_size": 65536 00:09:02.440 }, 00:09:02.440 { 00:09:02.440 "name": "BaseBdev3", 00:09:02.440 "uuid": "064fd39b-25c0-4dbc-8115-a5aa9b79b953", 00:09:02.440 "is_configured": true, 00:09:02.440 "data_offset": 0, 00:09:02.440 "data_size": 65536 00:09:02.440 } 00:09:02.440 ] 00:09:02.440 }' 00:09:02.440 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.440 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.700 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:02.700 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.700 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.700 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.700 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.959 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:02.959 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:02.959 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.959 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.959 [2024-11-26 12:52:20.400470] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:02.959 12:52:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.959 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:02.959 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.959 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.959 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:02.959 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:02.959 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.959 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.959 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.959 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.959 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.959 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.959 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.959 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.959 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.960 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.960 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.960 "name": "Existed_Raid", 00:09:02.960 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:02.960 "strip_size_kb": 0, 00:09:02.960 "state": "configuring", 00:09:02.960 "raid_level": "raid1", 00:09:02.960 "superblock": false, 00:09:02.960 "num_base_bdevs": 3, 00:09:02.960 "num_base_bdevs_discovered": 1, 00:09:02.960 "num_base_bdevs_operational": 3, 00:09:02.960 "base_bdevs_list": [ 00:09:02.960 { 00:09:02.960 "name": "BaseBdev1", 00:09:02.960 "uuid": "50fe4c5b-4374-4e94-8eaf-012f126876e0", 00:09:02.960 "is_configured": true, 00:09:02.960 "data_offset": 0, 00:09:02.960 "data_size": 65536 00:09:02.960 }, 00:09:02.960 { 00:09:02.960 "name": null, 00:09:02.960 "uuid": "9ea44fe2-00b1-48e7-a323-50e06714cd2c", 00:09:02.960 "is_configured": false, 00:09:02.960 "data_offset": 0, 00:09:02.960 "data_size": 65536 00:09:02.960 }, 00:09:02.960 { 00:09:02.960 "name": null, 00:09:02.960 "uuid": "064fd39b-25c0-4dbc-8115-a5aa9b79b953", 00:09:02.960 "is_configured": false, 00:09:02.960 "data_offset": 0, 00:09:02.960 "data_size": 65536 00:09:02.960 } 00:09:02.960 ] 00:09:02.960 }' 00:09:02.960 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.960 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.219 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.219 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.219 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.219 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:03.219 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.479 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:03.479 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:03.479 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.479 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.479 [2024-11-26 12:52:20.915622] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:03.479 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.479 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:03.479 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.479 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.480 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:03.480 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:03.480 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.480 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.480 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.480 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.480 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.480 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.480 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.480 12:52:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.480 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.480 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.480 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.480 "name": "Existed_Raid", 00:09:03.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.480 "strip_size_kb": 0, 00:09:03.480 "state": "configuring", 00:09:03.480 "raid_level": "raid1", 00:09:03.480 "superblock": false, 00:09:03.480 "num_base_bdevs": 3, 00:09:03.480 "num_base_bdevs_discovered": 2, 00:09:03.480 "num_base_bdevs_operational": 3, 00:09:03.480 "base_bdevs_list": [ 00:09:03.480 { 00:09:03.480 "name": "BaseBdev1", 00:09:03.480 "uuid": "50fe4c5b-4374-4e94-8eaf-012f126876e0", 00:09:03.480 "is_configured": true, 00:09:03.480 "data_offset": 0, 00:09:03.480 "data_size": 65536 00:09:03.480 }, 00:09:03.480 { 00:09:03.480 "name": null, 00:09:03.480 "uuid": "9ea44fe2-00b1-48e7-a323-50e06714cd2c", 00:09:03.480 "is_configured": false, 00:09:03.480 "data_offset": 0, 00:09:03.480 "data_size": 65536 00:09:03.480 }, 00:09:03.480 { 00:09:03.480 "name": "BaseBdev3", 00:09:03.480 "uuid": "064fd39b-25c0-4dbc-8115-a5aa9b79b953", 00:09:03.480 "is_configured": true, 00:09:03.480 "data_offset": 0, 00:09:03.480 "data_size": 65536 00:09:03.480 } 00:09:03.480 ] 00:09:03.480 }' 00:09:03.480 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.480 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.740 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.740 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.740 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:03.740 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:03.740 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.740 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:03.740 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:03.740 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.740 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.740 [2024-11-26 12:52:21.403142] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:04.000 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.000 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:04.000 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.000 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.000 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:04.000 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:04.000 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.000 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.000 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.000 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.000 12:52:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.000 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.000 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.000 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.000 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.000 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.000 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.000 "name": "Existed_Raid", 00:09:04.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.000 "strip_size_kb": 0, 00:09:04.000 "state": "configuring", 00:09:04.000 "raid_level": "raid1", 00:09:04.000 "superblock": false, 00:09:04.000 "num_base_bdevs": 3, 00:09:04.000 "num_base_bdevs_discovered": 1, 00:09:04.000 "num_base_bdevs_operational": 3, 00:09:04.000 "base_bdevs_list": [ 00:09:04.000 { 00:09:04.000 "name": null, 00:09:04.000 "uuid": "50fe4c5b-4374-4e94-8eaf-012f126876e0", 00:09:04.000 "is_configured": false, 00:09:04.000 "data_offset": 0, 00:09:04.000 "data_size": 65536 00:09:04.000 }, 00:09:04.000 { 00:09:04.000 "name": null, 00:09:04.000 "uuid": "9ea44fe2-00b1-48e7-a323-50e06714cd2c", 00:09:04.000 "is_configured": false, 00:09:04.000 "data_offset": 0, 00:09:04.000 "data_size": 65536 00:09:04.000 }, 00:09:04.000 { 00:09:04.000 "name": "BaseBdev3", 00:09:04.000 "uuid": "064fd39b-25c0-4dbc-8115-a5aa9b79b953", 00:09:04.000 "is_configured": true, 00:09:04.000 "data_offset": 0, 00:09:04.000 "data_size": 65536 00:09:04.000 } 00:09:04.000 ] 00:09:04.000 }' 00:09:04.000 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.000 12:52:21 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:04.259 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.259 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.259 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.259 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:04.259 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.520 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:04.520 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:04.520 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.520 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.520 [2024-11-26 12:52:21.969933] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:04.520 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.520 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:04.520 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.520 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.520 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:04.520 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:04.520 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:04.520 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.520 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.520 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.520 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.520 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.520 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.520 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.520 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.520 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.520 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.520 "name": "Existed_Raid", 00:09:04.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.520 "strip_size_kb": 0, 00:09:04.520 "state": "configuring", 00:09:04.520 "raid_level": "raid1", 00:09:04.520 "superblock": false, 00:09:04.520 "num_base_bdevs": 3, 00:09:04.520 "num_base_bdevs_discovered": 2, 00:09:04.520 "num_base_bdevs_operational": 3, 00:09:04.520 "base_bdevs_list": [ 00:09:04.520 { 00:09:04.520 "name": null, 00:09:04.520 "uuid": "50fe4c5b-4374-4e94-8eaf-012f126876e0", 00:09:04.520 "is_configured": false, 00:09:04.520 "data_offset": 0, 00:09:04.520 "data_size": 65536 00:09:04.520 }, 00:09:04.520 { 00:09:04.520 "name": "BaseBdev2", 00:09:04.520 "uuid": "9ea44fe2-00b1-48e7-a323-50e06714cd2c", 00:09:04.520 "is_configured": true, 00:09:04.520 "data_offset": 0, 00:09:04.520 "data_size": 65536 00:09:04.520 }, 00:09:04.520 { 
00:09:04.520 "name": "BaseBdev3", 00:09:04.520 "uuid": "064fd39b-25c0-4dbc-8115-a5aa9b79b953", 00:09:04.520 "is_configured": true, 00:09:04.520 "data_offset": 0, 00:09:04.520 "data_size": 65536 00:09:04.520 } 00:09:04.520 ] 00:09:04.520 }' 00:09:04.520 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.520 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.781 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.781 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:04.781 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.781 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.781 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.042 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:05.042 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.042 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:05.042 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.042 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.042 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.042 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 50fe4c5b-4374-4e94-8eaf-012f126876e0 00:09:05.042 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.042 12:52:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.042 [2024-11-26 12:52:22.537732] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:05.042 [2024-11-26 12:52:22.537789] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:05.042 [2024-11-26 12:52:22.537796] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:05.042 [2024-11-26 12:52:22.538067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:05.042 [2024-11-26 12:52:22.538248] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:05.042 [2024-11-26 12:52:22.538285] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:05.042 [2024-11-26 12:52:22.538493] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.042 NewBaseBdev 00:09:05.042 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.042 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:05.042 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:05.042 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:05.042 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:05.042 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:05.042 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:05.042 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:05.042 12:52:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.042 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.042 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.042 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:05.042 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.042 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.042 [ 00:09:05.042 { 00:09:05.042 "name": "NewBaseBdev", 00:09:05.042 "aliases": [ 00:09:05.042 "50fe4c5b-4374-4e94-8eaf-012f126876e0" 00:09:05.042 ], 00:09:05.042 "product_name": "Malloc disk", 00:09:05.042 "block_size": 512, 00:09:05.042 "num_blocks": 65536, 00:09:05.042 "uuid": "50fe4c5b-4374-4e94-8eaf-012f126876e0", 00:09:05.042 "assigned_rate_limits": { 00:09:05.042 "rw_ios_per_sec": 0, 00:09:05.042 "rw_mbytes_per_sec": 0, 00:09:05.042 "r_mbytes_per_sec": 0, 00:09:05.042 "w_mbytes_per_sec": 0 00:09:05.042 }, 00:09:05.042 "claimed": true, 00:09:05.042 "claim_type": "exclusive_write", 00:09:05.042 "zoned": false, 00:09:05.042 "supported_io_types": { 00:09:05.042 "read": true, 00:09:05.042 "write": true, 00:09:05.042 "unmap": true, 00:09:05.042 "flush": true, 00:09:05.042 "reset": true, 00:09:05.042 "nvme_admin": false, 00:09:05.042 "nvme_io": false, 00:09:05.042 "nvme_io_md": false, 00:09:05.042 "write_zeroes": true, 00:09:05.042 "zcopy": true, 00:09:05.042 "get_zone_info": false, 00:09:05.042 "zone_management": false, 00:09:05.043 "zone_append": false, 00:09:05.043 "compare": false, 00:09:05.043 "compare_and_write": false, 00:09:05.043 "abort": true, 00:09:05.043 "seek_hole": false, 00:09:05.043 "seek_data": false, 00:09:05.043 "copy": true, 00:09:05.043 "nvme_iov_md": false 00:09:05.043 }, 00:09:05.043 "memory_domains": [ 00:09:05.043 { 00:09:05.043 
"dma_device_id": "system", 00:09:05.043 "dma_device_type": 1 00:09:05.043 }, 00:09:05.043 { 00:09:05.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.043 "dma_device_type": 2 00:09:05.043 } 00:09:05.043 ], 00:09:05.043 "driver_specific": {} 00:09:05.043 } 00:09:05.043 ] 00:09:05.043 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.043 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:05.043 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:05.043 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.043 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:05.043 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:05.043 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:05.043 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.043 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.043 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.043 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.043 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.043 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.043 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.043 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:05.043 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.043 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.043 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.043 "name": "Existed_Raid", 00:09:05.043 "uuid": "c38a720e-3fd4-4871-b374-72196adae25b", 00:09:05.043 "strip_size_kb": 0, 00:09:05.043 "state": "online", 00:09:05.043 "raid_level": "raid1", 00:09:05.043 "superblock": false, 00:09:05.043 "num_base_bdevs": 3, 00:09:05.043 "num_base_bdevs_discovered": 3, 00:09:05.043 "num_base_bdevs_operational": 3, 00:09:05.043 "base_bdevs_list": [ 00:09:05.043 { 00:09:05.043 "name": "NewBaseBdev", 00:09:05.043 "uuid": "50fe4c5b-4374-4e94-8eaf-012f126876e0", 00:09:05.043 "is_configured": true, 00:09:05.043 "data_offset": 0, 00:09:05.043 "data_size": 65536 00:09:05.043 }, 00:09:05.043 { 00:09:05.043 "name": "BaseBdev2", 00:09:05.043 "uuid": "9ea44fe2-00b1-48e7-a323-50e06714cd2c", 00:09:05.043 "is_configured": true, 00:09:05.043 "data_offset": 0, 00:09:05.043 "data_size": 65536 00:09:05.043 }, 00:09:05.043 { 00:09:05.043 "name": "BaseBdev3", 00:09:05.043 "uuid": "064fd39b-25c0-4dbc-8115-a5aa9b79b953", 00:09:05.043 "is_configured": true, 00:09:05.043 "data_offset": 0, 00:09:05.043 "data_size": 65536 00:09:05.043 } 00:09:05.043 ] 00:09:05.043 }' 00:09:05.043 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.043 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.619 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:05.619 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:05.619 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:05.620 12:52:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.620 [2024-11-26 12:52:23.025215] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:05.620 "name": "Existed_Raid", 00:09:05.620 "aliases": [ 00:09:05.620 "c38a720e-3fd4-4871-b374-72196adae25b" 00:09:05.620 ], 00:09:05.620 "product_name": "Raid Volume", 00:09:05.620 "block_size": 512, 00:09:05.620 "num_blocks": 65536, 00:09:05.620 "uuid": "c38a720e-3fd4-4871-b374-72196adae25b", 00:09:05.620 "assigned_rate_limits": { 00:09:05.620 "rw_ios_per_sec": 0, 00:09:05.620 "rw_mbytes_per_sec": 0, 00:09:05.620 "r_mbytes_per_sec": 0, 00:09:05.620 "w_mbytes_per_sec": 0 00:09:05.620 }, 00:09:05.620 "claimed": false, 00:09:05.620 "zoned": false, 00:09:05.620 "supported_io_types": { 00:09:05.620 "read": true, 00:09:05.620 "write": true, 00:09:05.620 "unmap": false, 00:09:05.620 "flush": false, 00:09:05.620 "reset": true, 00:09:05.620 "nvme_admin": false, 00:09:05.620 "nvme_io": false, 00:09:05.620 "nvme_io_md": false, 00:09:05.620 "write_zeroes": true, 00:09:05.620 "zcopy": false, 00:09:05.620 
"get_zone_info": false, 00:09:05.620 "zone_management": false, 00:09:05.620 "zone_append": false, 00:09:05.620 "compare": false, 00:09:05.620 "compare_and_write": false, 00:09:05.620 "abort": false, 00:09:05.620 "seek_hole": false, 00:09:05.620 "seek_data": false, 00:09:05.620 "copy": false, 00:09:05.620 "nvme_iov_md": false 00:09:05.620 }, 00:09:05.620 "memory_domains": [ 00:09:05.620 { 00:09:05.620 "dma_device_id": "system", 00:09:05.620 "dma_device_type": 1 00:09:05.620 }, 00:09:05.620 { 00:09:05.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.620 "dma_device_type": 2 00:09:05.620 }, 00:09:05.620 { 00:09:05.620 "dma_device_id": "system", 00:09:05.620 "dma_device_type": 1 00:09:05.620 }, 00:09:05.620 { 00:09:05.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.620 "dma_device_type": 2 00:09:05.620 }, 00:09:05.620 { 00:09:05.620 "dma_device_id": "system", 00:09:05.620 "dma_device_type": 1 00:09:05.620 }, 00:09:05.620 { 00:09:05.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.620 "dma_device_type": 2 00:09:05.620 } 00:09:05.620 ], 00:09:05.620 "driver_specific": { 00:09:05.620 "raid": { 00:09:05.620 "uuid": "c38a720e-3fd4-4871-b374-72196adae25b", 00:09:05.620 "strip_size_kb": 0, 00:09:05.620 "state": "online", 00:09:05.620 "raid_level": "raid1", 00:09:05.620 "superblock": false, 00:09:05.620 "num_base_bdevs": 3, 00:09:05.620 "num_base_bdevs_discovered": 3, 00:09:05.620 "num_base_bdevs_operational": 3, 00:09:05.620 "base_bdevs_list": [ 00:09:05.620 { 00:09:05.620 "name": "NewBaseBdev", 00:09:05.620 "uuid": "50fe4c5b-4374-4e94-8eaf-012f126876e0", 00:09:05.620 "is_configured": true, 00:09:05.620 "data_offset": 0, 00:09:05.620 "data_size": 65536 00:09:05.620 }, 00:09:05.620 { 00:09:05.620 "name": "BaseBdev2", 00:09:05.620 "uuid": "9ea44fe2-00b1-48e7-a323-50e06714cd2c", 00:09:05.620 "is_configured": true, 00:09:05.620 "data_offset": 0, 00:09:05.620 "data_size": 65536 00:09:05.620 }, 00:09:05.620 { 00:09:05.620 "name": "BaseBdev3", 00:09:05.620 "uuid": 
"064fd39b-25c0-4dbc-8115-a5aa9b79b953", 00:09:05.620 "is_configured": true, 00:09:05.620 "data_offset": 0, 00:09:05.620 "data_size": 65536 00:09:05.620 } 00:09:05.620 ] 00:09:05.620 } 00:09:05.620 } 00:09:05.620 }' 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:05.620 BaseBdev2 00:09:05.620 BaseBdev3' 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.620 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.893 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.893 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.893 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:05.893 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.893 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:05.893 [2024-11-26 12:52:23.320436] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:05.893 [2024-11-26 12:52:23.320518] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:05.893 [2024-11-26 12:52:23.320602] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:05.893 [2024-11-26 12:52:23.320865] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:05.893 [2024-11-26 12:52:23.320875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:05.893 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.893 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78675 00:09:05.893 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 78675 ']' 00:09:05.893 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 78675 00:09:05.893 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:05.893 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:05.893 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78675 00:09:05.893 killing process with pid 78675 00:09:05.893 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:05.893 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:05.893 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78675' 00:09:05.893 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 78675 00:09:05.893 
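The killprocess sequence above (autotest_common.sh@950-974) probes the target pid with `kill -0`, sends the signal, then waits for the process to exit. A minimal sketch of that liveness-check-then-reap pattern using a throwaway `sleep` in place of the bdev_svc process — the real helper also verifies the process name and special-cases sudo, which is omitted here:

```shell
# Sketch of the killprocess pattern: `kill -0` tests existence without
# signalling, `kill` sends SIGTERM, `wait` reaps the child.
sleep 30 &
raid_pid=$!
kill -0 "$raid_pid"          # succeeds (exit 0) while the process exists
alive_before=$?
kill "$raid_pid"
wait "$raid_pid" 2>/dev/null || true   # wait's status reflects the signal; ignore it
echo "alive_before=$alive_before"
```

After the `wait`, a second `kill -0` on the same pid fails, which is how the helper distinguishes a cleanly stopped target from one that is still shutting down.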
[2024-11-26 12:52:23.359738] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:05.893 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 78675 00:09:05.893 [2024-11-26 12:52:23.417138] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:06.153 ************************************ 00:09:06.153 END TEST raid_state_function_test 00:09:06.153 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:06.153 00:09:06.153 real 0m9.039s 00:09:06.153 user 0m15.130s 00:09:06.153 sys 0m1.938s 00:09:06.153 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:06.153 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.153 ************************************ 00:09:06.414 12:52:23 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:06.414 12:52:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:06.414 12:52:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:06.414 12:52:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:06.414 ************************************ 00:09:06.414 START TEST raid_state_function_test_sb 00:09:06.414 ************************************ 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:06.414 12:52:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:06.414 
12:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=79284 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79284' 00:09:06.414 Process raid pid: 79284 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 79284 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 79284 ']' 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:06.414 12:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.414 [2024-11-26 12:52:23.968986] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:06.414 [2024-11-26 12:52:23.969207] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.674 [2024-11-26 12:52:24.124624] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.674 [2024-11-26 12:52:24.196943] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.674 [2024-11-26 12:52:24.273089] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:06.674 [2024-11-26 12:52:24.273230] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.244 12:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:07.244 12:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:07.244 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:07.244 12:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.244 12:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.244 [2024-11-26 12:52:24.789039] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:07.244 [2024-11-26 12:52:24.789100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:07.244 [2024-11-26 12:52:24.789113] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:07.244 [2024-11-26 12:52:24.789123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:07.244 [2024-11-26 12:52:24.789130] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:07.244 [2024-11-26 12:52:24.789143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:07.244 12:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.244 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:07.244 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.244 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.244 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:07.244 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:07.244 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.244 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.244 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.244 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.244 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.244 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.244 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.244 12:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.244 12:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.244 12:52:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.244 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.244 "name": "Existed_Raid", 00:09:07.244 "uuid": "14b1bed8-c9ac-4a2e-a24c-acc31a74d14f", 00:09:07.244 "strip_size_kb": 0, 00:09:07.244 "state": "configuring", 00:09:07.244 "raid_level": "raid1", 00:09:07.244 "superblock": true, 00:09:07.244 "num_base_bdevs": 3, 00:09:07.244 "num_base_bdevs_discovered": 0, 00:09:07.244 "num_base_bdevs_operational": 3, 00:09:07.244 "base_bdevs_list": [ 00:09:07.244 { 00:09:07.244 "name": "BaseBdev1", 00:09:07.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.244 "is_configured": false, 00:09:07.244 "data_offset": 0, 00:09:07.244 "data_size": 0 00:09:07.244 }, 00:09:07.244 { 00:09:07.244 "name": "BaseBdev2", 00:09:07.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.244 "is_configured": false, 00:09:07.244 "data_offset": 0, 00:09:07.244 "data_size": 0 00:09:07.244 }, 00:09:07.244 { 00:09:07.244 "name": "BaseBdev3", 00:09:07.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.244 "is_configured": false, 00:09:07.244 "data_offset": 0, 00:09:07.244 "data_size": 0 00:09:07.244 } 00:09:07.244 ] 00:09:07.244 }' 00:09:07.244 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.244 12:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.815 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:07.815 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.815 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.815 [2024-11-26 12:52:25.236164] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:07.815 [2024-11-26 12:52:25.236311] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:07.815 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.815 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:07.815 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.815 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.815 [2024-11-26 12:52:25.248186] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:07.815 [2024-11-26 12:52:25.248270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:07.815 [2024-11-26 12:52:25.248299] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:07.815 [2024-11-26 12:52:25.248323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:07.815 [2024-11-26 12:52:25.248341] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:07.815 [2024-11-26 12:52:25.248363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.816 [2024-11-26 12:52:25.275214] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:07.816 BaseBdev1 
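Throughout this test, verify_raid_bdev_state (bdev_raid.sh@103-115) fetches the Existed_Raid entry from `rpc_cmd bdev_raid_get_bdevs all`, selects it with jq, and checks fields such as state, raid_level, and num_base_bdevs_discovered. A self-contained sketch of that field check against a canned JSON snippet, with a sed one-liner standing in for jq (adequate only for this flat, one-field-per-line snippet) and a hard-coded stand-in for the RPC output:

```shell
# Canned stand-in for:
#   rpc_cmd bdev_raid_get_bdevs all | jq '.[] | select(.name == "Existed_Raid")'
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid1",
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 3
}'

# Extract one scalar field per call; handles quoted and unquoted values.
get_field() {
    printf '%s\n' "$raid_bdev_info" |
        sed -n "s/.*\"$1\": \"\{0,1\}\([^\",]*\)\"\{0,1\},\{0,1\}\$/\1/p"
}

state=$(get_field state)
raid_level=$(get_field raid_level)
discovered=$(get_field num_base_bdevs_discovered)
echo "state=$state level=$raid_level discovered=$discovered"
```

This mirrors the assertions the trace makes after each step: right after `bdev_malloc_create ... -b BaseBdev1`, the raid bdev should still be "configuring" with one of three base bdevs discovered, and it only transitions to "online" once all three are claimed.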
00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.816 [ 00:09:07.816 { 00:09:07.816 "name": "BaseBdev1", 00:09:07.816 "aliases": [ 00:09:07.816 "256d81b3-6098-4986-b911-de74eb4f7f2e" 00:09:07.816 ], 00:09:07.816 "product_name": "Malloc disk", 00:09:07.816 "block_size": 512, 00:09:07.816 "num_blocks": 65536, 00:09:07.816 "uuid": "256d81b3-6098-4986-b911-de74eb4f7f2e", 00:09:07.816 "assigned_rate_limits": { 00:09:07.816 
"rw_ios_per_sec": 0, 00:09:07.816 "rw_mbytes_per_sec": 0, 00:09:07.816 "r_mbytes_per_sec": 0, 00:09:07.816 "w_mbytes_per_sec": 0 00:09:07.816 }, 00:09:07.816 "claimed": true, 00:09:07.816 "claim_type": "exclusive_write", 00:09:07.816 "zoned": false, 00:09:07.816 "supported_io_types": { 00:09:07.816 "read": true, 00:09:07.816 "write": true, 00:09:07.816 "unmap": true, 00:09:07.816 "flush": true, 00:09:07.816 "reset": true, 00:09:07.816 "nvme_admin": false, 00:09:07.816 "nvme_io": false, 00:09:07.816 "nvme_io_md": false, 00:09:07.816 "write_zeroes": true, 00:09:07.816 "zcopy": true, 00:09:07.816 "get_zone_info": false, 00:09:07.816 "zone_management": false, 00:09:07.816 "zone_append": false, 00:09:07.816 "compare": false, 00:09:07.816 "compare_and_write": false, 00:09:07.816 "abort": true, 00:09:07.816 "seek_hole": false, 00:09:07.816 "seek_data": false, 00:09:07.816 "copy": true, 00:09:07.816 "nvme_iov_md": false 00:09:07.816 }, 00:09:07.816 "memory_domains": [ 00:09:07.816 { 00:09:07.816 "dma_device_id": "system", 00:09:07.816 "dma_device_type": 1 00:09:07.816 }, 00:09:07.816 { 00:09:07.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.816 "dma_device_type": 2 00:09:07.816 } 00:09:07.816 ], 00:09:07.816 "driver_specific": {} 00:09:07.816 } 00:09:07.816 ] 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.816 "name": "Existed_Raid", 00:09:07.816 "uuid": "d8b1aa9f-d6ff-4bc5-a5d2-e4685ae10637", 00:09:07.816 "strip_size_kb": 0, 00:09:07.816 "state": "configuring", 00:09:07.816 "raid_level": "raid1", 00:09:07.816 "superblock": true, 00:09:07.816 "num_base_bdevs": 3, 00:09:07.816 "num_base_bdevs_discovered": 1, 00:09:07.816 "num_base_bdevs_operational": 3, 00:09:07.816 "base_bdevs_list": [ 00:09:07.816 { 00:09:07.816 "name": "BaseBdev1", 00:09:07.816 "uuid": "256d81b3-6098-4986-b911-de74eb4f7f2e", 00:09:07.816 "is_configured": true, 00:09:07.816 "data_offset": 2048, 00:09:07.816 "data_size": 63488 
00:09:07.816 }, 00:09:07.816 { 00:09:07.816 "name": "BaseBdev2", 00:09:07.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.816 "is_configured": false, 00:09:07.816 "data_offset": 0, 00:09:07.816 "data_size": 0 00:09:07.816 }, 00:09:07.816 { 00:09:07.816 "name": "BaseBdev3", 00:09:07.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.816 "is_configured": false, 00:09:07.816 "data_offset": 0, 00:09:07.816 "data_size": 0 00:09:07.816 } 00:09:07.816 ] 00:09:07.816 }' 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.816 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.076 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:08.076 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.076 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.076 [2024-11-26 12:52:25.750410] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:08.076 [2024-11-26 12:52:25.750540] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:08.336 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.336 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:08.336 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.336 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.336 [2024-11-26 12:52:25.758438] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:08.336 [2024-11-26 12:52:25.760683] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:08.336 [2024-11-26 12:52:25.760766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:08.336 [2024-11-26 12:52:25.760795] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:08.336 [2024-11-26 12:52:25.760819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:08.336 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.336 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:08.336 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:08.336 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:08.336 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.336 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.336 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:08.336 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:08.336 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.336 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.336 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.336 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.336 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:08.336 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.336 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.336 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.336 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.336 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.336 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.336 "name": "Existed_Raid", 00:09:08.336 "uuid": "340e09ed-5c93-47f5-905e-89ddbe6972b5", 00:09:08.336 "strip_size_kb": 0, 00:09:08.336 "state": "configuring", 00:09:08.336 "raid_level": "raid1", 00:09:08.336 "superblock": true, 00:09:08.336 "num_base_bdevs": 3, 00:09:08.336 "num_base_bdevs_discovered": 1, 00:09:08.336 "num_base_bdevs_operational": 3, 00:09:08.336 "base_bdevs_list": [ 00:09:08.336 { 00:09:08.336 "name": "BaseBdev1", 00:09:08.336 "uuid": "256d81b3-6098-4986-b911-de74eb4f7f2e", 00:09:08.336 "is_configured": true, 00:09:08.336 "data_offset": 2048, 00:09:08.336 "data_size": 63488 00:09:08.336 }, 00:09:08.336 { 00:09:08.336 "name": "BaseBdev2", 00:09:08.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.336 "is_configured": false, 00:09:08.336 "data_offset": 0, 00:09:08.336 "data_size": 0 00:09:08.336 }, 00:09:08.336 { 00:09:08.336 "name": "BaseBdev3", 00:09:08.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.336 "is_configured": false, 00:09:08.336 "data_offset": 0, 00:09:08.336 "data_size": 0 00:09:08.336 } 00:09:08.336 ] 00:09:08.336 }' 00:09:08.336 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.336 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.597 [2024-11-26 12:52:26.194457] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:08.597 BaseBdev2 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.597 [ 00:09:08.597 { 00:09:08.597 "name": "BaseBdev2", 00:09:08.597 "aliases": [ 00:09:08.597 "8ccadcb8-b909-4dec-ad3c-fba78810e3d2" 00:09:08.597 ], 00:09:08.597 "product_name": "Malloc disk", 00:09:08.597 "block_size": 512, 00:09:08.597 "num_blocks": 65536, 00:09:08.597 "uuid": "8ccadcb8-b909-4dec-ad3c-fba78810e3d2", 00:09:08.597 "assigned_rate_limits": { 00:09:08.597 "rw_ios_per_sec": 0, 00:09:08.597 "rw_mbytes_per_sec": 0, 00:09:08.597 "r_mbytes_per_sec": 0, 00:09:08.597 "w_mbytes_per_sec": 0 00:09:08.597 }, 00:09:08.597 "claimed": true, 00:09:08.597 "claim_type": "exclusive_write", 00:09:08.597 "zoned": false, 00:09:08.597 "supported_io_types": { 00:09:08.597 "read": true, 00:09:08.597 "write": true, 00:09:08.597 "unmap": true, 00:09:08.597 "flush": true, 00:09:08.597 "reset": true, 00:09:08.597 "nvme_admin": false, 00:09:08.597 "nvme_io": false, 00:09:08.597 "nvme_io_md": false, 00:09:08.597 "write_zeroes": true, 00:09:08.597 "zcopy": true, 00:09:08.597 "get_zone_info": false, 00:09:08.597 "zone_management": false, 00:09:08.597 "zone_append": false, 00:09:08.597 "compare": false, 00:09:08.597 "compare_and_write": false, 00:09:08.597 "abort": true, 00:09:08.597 "seek_hole": false, 00:09:08.597 "seek_data": false, 00:09:08.597 "copy": true, 00:09:08.597 "nvme_iov_md": false 00:09:08.597 }, 00:09:08.597 "memory_domains": [ 00:09:08.597 { 00:09:08.597 "dma_device_id": "system", 00:09:08.597 "dma_device_type": 1 00:09:08.597 }, 00:09:08.597 { 00:09:08.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.597 "dma_device_type": 2 00:09:08.597 } 00:09:08.597 ], 00:09:08.597 "driver_specific": {} 00:09:08.597 } 00:09:08.597 ] 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.597 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.597 
12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.597 "name": "Existed_Raid", 00:09:08.597 "uuid": "340e09ed-5c93-47f5-905e-89ddbe6972b5", 00:09:08.597 "strip_size_kb": 0, 00:09:08.597 "state": "configuring", 00:09:08.597 "raid_level": "raid1", 00:09:08.597 "superblock": true, 00:09:08.597 "num_base_bdevs": 3, 00:09:08.597 "num_base_bdevs_discovered": 2, 00:09:08.597 "num_base_bdevs_operational": 3, 00:09:08.597 "base_bdevs_list": [ 00:09:08.597 { 00:09:08.597 "name": "BaseBdev1", 00:09:08.597 "uuid": "256d81b3-6098-4986-b911-de74eb4f7f2e", 00:09:08.597 "is_configured": true, 00:09:08.597 "data_offset": 2048, 00:09:08.597 "data_size": 63488 00:09:08.597 }, 00:09:08.597 { 00:09:08.597 "name": "BaseBdev2", 00:09:08.597 "uuid": "8ccadcb8-b909-4dec-ad3c-fba78810e3d2", 00:09:08.597 "is_configured": true, 00:09:08.597 "data_offset": 2048, 00:09:08.597 "data_size": 63488 00:09:08.597 }, 00:09:08.597 { 00:09:08.597 "name": "BaseBdev3", 00:09:08.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.597 "is_configured": false, 00:09:08.597 "data_offset": 0, 00:09:08.597 "data_size": 0 00:09:08.597 } 00:09:08.597 ] 00:09:08.597 }' 00:09:08.598 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.598 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.168 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:09.168 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.169 [2024-11-26 12:52:26.650390] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:09.169 [2024-11-26 12:52:26.650612] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000006980 00:09:09.169 [2024-11-26 12:52:26.650632] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:09.169 [2024-11-26 12:52:26.650962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:09.169 [2024-11-26 12:52:26.651103] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:09.169 [2024-11-26 12:52:26.651113] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:09.169 BaseBdev3 00:09:09.169 [2024-11-26 12:52:26.651287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.169 12:52:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.169 [ 00:09:09.169 { 00:09:09.169 "name": "BaseBdev3", 00:09:09.169 "aliases": [ 00:09:09.169 "918a67c4-f930-414d-bbb6-d1e0d112eb99" 00:09:09.169 ], 00:09:09.169 "product_name": "Malloc disk", 00:09:09.169 "block_size": 512, 00:09:09.169 "num_blocks": 65536, 00:09:09.169 "uuid": "918a67c4-f930-414d-bbb6-d1e0d112eb99", 00:09:09.169 "assigned_rate_limits": { 00:09:09.169 "rw_ios_per_sec": 0, 00:09:09.169 "rw_mbytes_per_sec": 0, 00:09:09.169 "r_mbytes_per_sec": 0, 00:09:09.169 "w_mbytes_per_sec": 0 00:09:09.169 }, 00:09:09.169 "claimed": true, 00:09:09.169 "claim_type": "exclusive_write", 00:09:09.169 "zoned": false, 00:09:09.169 "supported_io_types": { 00:09:09.169 "read": true, 00:09:09.169 "write": true, 00:09:09.169 "unmap": true, 00:09:09.169 "flush": true, 00:09:09.169 "reset": true, 00:09:09.169 "nvme_admin": false, 00:09:09.169 "nvme_io": false, 00:09:09.169 "nvme_io_md": false, 00:09:09.169 "write_zeroes": true, 00:09:09.169 "zcopy": true, 00:09:09.169 "get_zone_info": false, 00:09:09.169 "zone_management": false, 00:09:09.169 "zone_append": false, 00:09:09.169 "compare": false, 00:09:09.169 "compare_and_write": false, 00:09:09.169 "abort": true, 00:09:09.169 "seek_hole": false, 00:09:09.169 "seek_data": false, 00:09:09.169 "copy": true, 00:09:09.169 "nvme_iov_md": false 00:09:09.169 }, 00:09:09.169 "memory_domains": [ 00:09:09.169 { 00:09:09.169 "dma_device_id": "system", 00:09:09.169 "dma_device_type": 1 00:09:09.169 }, 00:09:09.169 { 00:09:09.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.169 "dma_device_type": 2 00:09:09.169 } 00:09:09.169 ], 00:09:09.169 "driver_specific": {} 00:09:09.169 } 00:09:09.169 ] 
00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.169 12:52:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.169 "name": "Existed_Raid", 00:09:09.169 "uuid": "340e09ed-5c93-47f5-905e-89ddbe6972b5", 00:09:09.169 "strip_size_kb": 0, 00:09:09.169 "state": "online", 00:09:09.169 "raid_level": "raid1", 00:09:09.169 "superblock": true, 00:09:09.169 "num_base_bdevs": 3, 00:09:09.169 "num_base_bdevs_discovered": 3, 00:09:09.169 "num_base_bdevs_operational": 3, 00:09:09.169 "base_bdevs_list": [ 00:09:09.169 { 00:09:09.169 "name": "BaseBdev1", 00:09:09.169 "uuid": "256d81b3-6098-4986-b911-de74eb4f7f2e", 00:09:09.169 "is_configured": true, 00:09:09.169 "data_offset": 2048, 00:09:09.169 "data_size": 63488 00:09:09.169 }, 00:09:09.169 { 00:09:09.169 "name": "BaseBdev2", 00:09:09.169 "uuid": "8ccadcb8-b909-4dec-ad3c-fba78810e3d2", 00:09:09.169 "is_configured": true, 00:09:09.169 "data_offset": 2048, 00:09:09.169 "data_size": 63488 00:09:09.169 }, 00:09:09.169 { 00:09:09.169 "name": "BaseBdev3", 00:09:09.169 "uuid": "918a67c4-f930-414d-bbb6-d1e0d112eb99", 00:09:09.169 "is_configured": true, 00:09:09.169 "data_offset": 2048, 00:09:09.169 "data_size": 63488 00:09:09.169 } 00:09:09.169 ] 00:09:09.169 }' 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.169 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:09.740 [2024-11-26 12:52:27.153879] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:09.740 "name": "Existed_Raid", 00:09:09.740 "aliases": [ 00:09:09.740 "340e09ed-5c93-47f5-905e-89ddbe6972b5" 00:09:09.740 ], 00:09:09.740 "product_name": "Raid Volume", 00:09:09.740 "block_size": 512, 00:09:09.740 "num_blocks": 63488, 00:09:09.740 "uuid": "340e09ed-5c93-47f5-905e-89ddbe6972b5", 00:09:09.740 "assigned_rate_limits": { 00:09:09.740 "rw_ios_per_sec": 0, 00:09:09.740 "rw_mbytes_per_sec": 0, 00:09:09.740 "r_mbytes_per_sec": 0, 00:09:09.740 "w_mbytes_per_sec": 0 00:09:09.740 }, 00:09:09.740 "claimed": false, 00:09:09.740 "zoned": false, 00:09:09.740 "supported_io_types": { 00:09:09.740 "read": true, 00:09:09.740 "write": true, 00:09:09.740 "unmap": false, 00:09:09.740 "flush": false, 00:09:09.740 "reset": true, 00:09:09.740 "nvme_admin": false, 00:09:09.740 "nvme_io": false, 00:09:09.740 "nvme_io_md": false, 00:09:09.740 
"write_zeroes": true, 00:09:09.740 "zcopy": false, 00:09:09.740 "get_zone_info": false, 00:09:09.740 "zone_management": false, 00:09:09.740 "zone_append": false, 00:09:09.740 "compare": false, 00:09:09.740 "compare_and_write": false, 00:09:09.740 "abort": false, 00:09:09.740 "seek_hole": false, 00:09:09.740 "seek_data": false, 00:09:09.740 "copy": false, 00:09:09.740 "nvme_iov_md": false 00:09:09.740 }, 00:09:09.740 "memory_domains": [ 00:09:09.740 { 00:09:09.740 "dma_device_id": "system", 00:09:09.740 "dma_device_type": 1 00:09:09.740 }, 00:09:09.740 { 00:09:09.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.740 "dma_device_type": 2 00:09:09.740 }, 00:09:09.740 { 00:09:09.740 "dma_device_id": "system", 00:09:09.740 "dma_device_type": 1 00:09:09.740 }, 00:09:09.740 { 00:09:09.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.740 "dma_device_type": 2 00:09:09.740 }, 00:09:09.740 { 00:09:09.740 "dma_device_id": "system", 00:09:09.740 "dma_device_type": 1 00:09:09.740 }, 00:09:09.740 { 00:09:09.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.740 "dma_device_type": 2 00:09:09.740 } 00:09:09.740 ], 00:09:09.740 "driver_specific": { 00:09:09.740 "raid": { 00:09:09.740 "uuid": "340e09ed-5c93-47f5-905e-89ddbe6972b5", 00:09:09.740 "strip_size_kb": 0, 00:09:09.740 "state": "online", 00:09:09.740 "raid_level": "raid1", 00:09:09.740 "superblock": true, 00:09:09.740 "num_base_bdevs": 3, 00:09:09.740 "num_base_bdevs_discovered": 3, 00:09:09.740 "num_base_bdevs_operational": 3, 00:09:09.740 "base_bdevs_list": [ 00:09:09.740 { 00:09:09.740 "name": "BaseBdev1", 00:09:09.740 "uuid": "256d81b3-6098-4986-b911-de74eb4f7f2e", 00:09:09.740 "is_configured": true, 00:09:09.740 "data_offset": 2048, 00:09:09.740 "data_size": 63488 00:09:09.740 }, 00:09:09.740 { 00:09:09.740 "name": "BaseBdev2", 00:09:09.740 "uuid": "8ccadcb8-b909-4dec-ad3c-fba78810e3d2", 00:09:09.740 "is_configured": true, 00:09:09.740 "data_offset": 2048, 00:09:09.740 "data_size": 63488 00:09:09.740 }, 
00:09:09.740 { 00:09:09.740 "name": "BaseBdev3", 00:09:09.740 "uuid": "918a67c4-f930-414d-bbb6-d1e0d112eb99", 00:09:09.740 "is_configured": true, 00:09:09.740 "data_offset": 2048, 00:09:09.740 "data_size": 63488 00:09:09.740 } 00:09:09.740 ] 00:09:09.740 } 00:09:09.740 } 00:09:09.740 }' 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:09.740 BaseBdev2 00:09:09.740 BaseBdev3' 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.740 
12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.740 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.740 [2024-11-26 12:52:27.409245] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:10.000 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.000 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:10.000 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:10.000 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:10.000 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:10.001 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:10.001 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:10.001 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.001 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.001 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:10.001 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:10.001 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:10.001 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.001 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.001 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.001 
12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.001 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.001 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.001 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.001 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.001 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.001 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.001 "name": "Existed_Raid", 00:09:10.001 "uuid": "340e09ed-5c93-47f5-905e-89ddbe6972b5", 00:09:10.001 "strip_size_kb": 0, 00:09:10.001 "state": "online", 00:09:10.001 "raid_level": "raid1", 00:09:10.001 "superblock": true, 00:09:10.001 "num_base_bdevs": 3, 00:09:10.001 "num_base_bdevs_discovered": 2, 00:09:10.001 "num_base_bdevs_operational": 2, 00:09:10.001 "base_bdevs_list": [ 00:09:10.001 { 00:09:10.001 "name": null, 00:09:10.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.001 "is_configured": false, 00:09:10.001 "data_offset": 0, 00:09:10.001 "data_size": 63488 00:09:10.001 }, 00:09:10.001 { 00:09:10.001 "name": "BaseBdev2", 00:09:10.001 "uuid": "8ccadcb8-b909-4dec-ad3c-fba78810e3d2", 00:09:10.001 "is_configured": true, 00:09:10.001 "data_offset": 2048, 00:09:10.001 "data_size": 63488 00:09:10.001 }, 00:09:10.001 { 00:09:10.001 "name": "BaseBdev3", 00:09:10.001 "uuid": "918a67c4-f930-414d-bbb6-d1e0d112eb99", 00:09:10.001 "is_configured": true, 00:09:10.001 "data_offset": 2048, 00:09:10.001 "data_size": 63488 00:09:10.001 } 00:09:10.001 ] 00:09:10.001 }' 00:09:10.001 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.001 
12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.261 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:10.261 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:10.261 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.261 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.261 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.261 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:10.261 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.261 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:10.261 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:10.261 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:10.261 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.261 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.261 [2024-11-26 12:52:27.877207] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:10.261 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.261 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:10.262 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:10.262 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:10.262 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:10.262 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.262 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.262 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.523 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:10.523 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:10.523 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:10.523 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.523 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.523 [2024-11-26 12:52:27.957601] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:10.523 [2024-11-26 12:52:27.957722] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:10.523 [2024-11-26 12:52:27.978761] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:10.523 [2024-11-26 12:52:27.978824] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:10.523 [2024-11-26 12:52:27.978842] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:10.523 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.523 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:10.523 12:52:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:10.523 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.523 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:10.523 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.523 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.523 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.523 BaseBdev2 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.523 [ 00:09:10.523 { 00:09:10.523 "name": "BaseBdev2", 00:09:10.523 "aliases": [ 00:09:10.523 "ff1b8920-8aa3-4396-abb2-59045ab6a24b" 00:09:10.523 ], 00:09:10.523 "product_name": "Malloc disk", 00:09:10.523 "block_size": 512, 00:09:10.523 "num_blocks": 65536, 00:09:10.523 "uuid": "ff1b8920-8aa3-4396-abb2-59045ab6a24b", 00:09:10.523 "assigned_rate_limits": { 00:09:10.523 "rw_ios_per_sec": 0, 00:09:10.523 "rw_mbytes_per_sec": 0, 00:09:10.523 "r_mbytes_per_sec": 0, 00:09:10.523 "w_mbytes_per_sec": 0 00:09:10.523 }, 00:09:10.523 "claimed": false, 00:09:10.523 "zoned": false, 00:09:10.523 "supported_io_types": { 00:09:10.523 "read": true, 00:09:10.523 "write": true, 00:09:10.523 "unmap": true, 00:09:10.523 "flush": true, 00:09:10.523 "reset": true, 00:09:10.523 "nvme_admin": false, 00:09:10.523 "nvme_io": false, 00:09:10.523 
"nvme_io_md": false, 00:09:10.523 "write_zeroes": true, 00:09:10.523 "zcopy": true, 00:09:10.523 "get_zone_info": false, 00:09:10.523 "zone_management": false, 00:09:10.523 "zone_append": false, 00:09:10.523 "compare": false, 00:09:10.523 "compare_and_write": false, 00:09:10.523 "abort": true, 00:09:10.523 "seek_hole": false, 00:09:10.523 "seek_data": false, 00:09:10.523 "copy": true, 00:09:10.523 "nvme_iov_md": false 00:09:10.523 }, 00:09:10.523 "memory_domains": [ 00:09:10.523 { 00:09:10.523 "dma_device_id": "system", 00:09:10.523 "dma_device_type": 1 00:09:10.523 }, 00:09:10.523 { 00:09:10.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.523 "dma_device_type": 2 00:09:10.523 } 00:09:10.523 ], 00:09:10.523 "driver_specific": {} 00:09:10.523 } 00:09:10.523 ] 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.523 BaseBdev3 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:10.523 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:10.524 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:10.524 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:10.524 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.524 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.524 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.524 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:10.524 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.524 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.524 [ 00:09:10.524 { 00:09:10.524 "name": "BaseBdev3", 00:09:10.524 "aliases": [ 00:09:10.524 "fa38447b-2805-47ae-98e4-2c82d6e4d340" 00:09:10.524 ], 00:09:10.524 "product_name": "Malloc disk", 00:09:10.524 "block_size": 512, 00:09:10.524 "num_blocks": 65536, 00:09:10.524 "uuid": "fa38447b-2805-47ae-98e4-2c82d6e4d340", 00:09:10.524 "assigned_rate_limits": { 00:09:10.524 "rw_ios_per_sec": 0, 00:09:10.524 "rw_mbytes_per_sec": 0, 00:09:10.524 "r_mbytes_per_sec": 0, 00:09:10.524 "w_mbytes_per_sec": 0 00:09:10.524 }, 00:09:10.524 "claimed": false, 00:09:10.524 "zoned": false, 00:09:10.524 "supported_io_types": { 00:09:10.524 "read": true, 00:09:10.524 "write": true, 00:09:10.524 "unmap": true, 00:09:10.524 "flush": true, 00:09:10.524 "reset": true, 00:09:10.524 "nvme_admin": false, 
00:09:10.524 "nvme_io": false, 00:09:10.524 "nvme_io_md": false, 00:09:10.524 "write_zeroes": true, 00:09:10.524 "zcopy": true, 00:09:10.524 "get_zone_info": false, 00:09:10.524 "zone_management": false, 00:09:10.524 "zone_append": false, 00:09:10.524 "compare": false, 00:09:10.524 "compare_and_write": false, 00:09:10.524 "abort": true, 00:09:10.524 "seek_hole": false, 00:09:10.524 "seek_data": false, 00:09:10.524 "copy": true, 00:09:10.524 "nvme_iov_md": false 00:09:10.524 }, 00:09:10.524 "memory_domains": [ 00:09:10.524 { 00:09:10.524 "dma_device_id": "system", 00:09:10.524 "dma_device_type": 1 00:09:10.524 }, 00:09:10.524 { 00:09:10.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.524 "dma_device_type": 2 00:09:10.524 } 00:09:10.524 ], 00:09:10.556 "driver_specific": {} 00:09:10.556 } 00:09:10.556 ] 00:09:10.556 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.556 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:10.556 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:10.556 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:10.556 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:10.556 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.556 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.556 [2024-11-26 12:52:28.160733] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:10.556 [2024-11-26 12:52:28.160869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:10.556 [2024-11-26 12:52:28.160895] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:10.556 [2024-11-26 12:52:28.163112] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.556 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.556 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:10.556 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.556 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.556 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:10.556 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:10.556 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.556 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.556 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.556 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.556 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.556 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.556 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.556 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.556 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.556 
12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.816 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.816 "name": "Existed_Raid", 00:09:10.816 "uuid": "1a6e2e29-77ac-417e-bc54-bacf63c92418", 00:09:10.816 "strip_size_kb": 0, 00:09:10.816 "state": "configuring", 00:09:10.816 "raid_level": "raid1", 00:09:10.816 "superblock": true, 00:09:10.816 "num_base_bdevs": 3, 00:09:10.816 "num_base_bdevs_discovered": 2, 00:09:10.816 "num_base_bdevs_operational": 3, 00:09:10.816 "base_bdevs_list": [ 00:09:10.816 { 00:09:10.816 "name": "BaseBdev1", 00:09:10.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.816 "is_configured": false, 00:09:10.816 "data_offset": 0, 00:09:10.816 "data_size": 0 00:09:10.816 }, 00:09:10.816 { 00:09:10.816 "name": "BaseBdev2", 00:09:10.816 "uuid": "ff1b8920-8aa3-4396-abb2-59045ab6a24b", 00:09:10.816 "is_configured": true, 00:09:10.816 "data_offset": 2048, 00:09:10.816 "data_size": 63488 00:09:10.816 }, 00:09:10.816 { 00:09:10.816 "name": "BaseBdev3", 00:09:10.816 "uuid": "fa38447b-2805-47ae-98e4-2c82d6e4d340", 00:09:10.816 "is_configured": true, 00:09:10.816 "data_offset": 2048, 00:09:10.816 "data_size": 63488 00:09:10.816 } 00:09:10.816 ] 00:09:10.816 }' 00:09:10.816 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.816 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.078 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:11.078 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.078 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.078 [2024-11-26 12:52:28.576049] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:11.078 12:52:28 
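The `verify_raid_bdev_state` helper seen above pulls the `Existed_Raid` entry out of `bdev_raid_get_bdevs all` with `jq` and compares its fields against the expected values. The same field-by-field check can be sketched in Python against a payload shaped like the dump in the log (field names and values are copied from the output above; this is an illustrative sketch, not part of the SPDK test suite):

```python
import json

# Abbreviated raid bdev info, as returned by `rpc_cmd bdev_raid_get_bdevs all`
# (values copied from the log dump above).
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "superblock": true,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": false},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, num_operational):
    """Mirror of the shell helper's comparisons (a sketch, not the real helper)."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational

# Matches the `verify_raid_bdev_state Existed_Raid configuring raid1 0 3` call in the log.
verify_raid_bdev_state(raid_bdev_info, "configuring", "raid1", 0, 3)
```

Because the array was created with `-s` (superblock) but `BaseBdev1` does not exist yet, the state stays `configuring` with two of three base bdevs discovered.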
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.078 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:11.078 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.078 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.078 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:11.078 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:11.078 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.078 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.078 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.078 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.078 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.078 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.078 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.078 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.078 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.078 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.078 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.078 "name": 
"Existed_Raid", 00:09:11.078 "uuid": "1a6e2e29-77ac-417e-bc54-bacf63c92418", 00:09:11.078 "strip_size_kb": 0, 00:09:11.078 "state": "configuring", 00:09:11.078 "raid_level": "raid1", 00:09:11.078 "superblock": true, 00:09:11.078 "num_base_bdevs": 3, 00:09:11.078 "num_base_bdevs_discovered": 1, 00:09:11.078 "num_base_bdevs_operational": 3, 00:09:11.078 "base_bdevs_list": [ 00:09:11.078 { 00:09:11.078 "name": "BaseBdev1", 00:09:11.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.078 "is_configured": false, 00:09:11.078 "data_offset": 0, 00:09:11.078 "data_size": 0 00:09:11.078 }, 00:09:11.078 { 00:09:11.078 "name": null, 00:09:11.078 "uuid": "ff1b8920-8aa3-4396-abb2-59045ab6a24b", 00:09:11.078 "is_configured": false, 00:09:11.078 "data_offset": 0, 00:09:11.078 "data_size": 63488 00:09:11.078 }, 00:09:11.078 { 00:09:11.078 "name": "BaseBdev3", 00:09:11.078 "uuid": "fa38447b-2805-47ae-98e4-2c82d6e4d340", 00:09:11.078 "is_configured": true, 00:09:11.078 "data_offset": 2048, 00:09:11.078 "data_size": 63488 00:09:11.078 } 00:09:11.078 ] 00:09:11.078 }' 00:09:11.078 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.078 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.338 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.338 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.338 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.338 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:11.600 
12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.600 [2024-11-26 12:52:29.052274] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:11.600 BaseBdev1 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.600 [ 00:09:11.600 { 00:09:11.600 "name": "BaseBdev1", 00:09:11.600 "aliases": [ 00:09:11.600 "946491f4-8a71-4b5a-a99f-a8cbd622bd3f" 00:09:11.600 ], 00:09:11.600 "product_name": "Malloc disk", 00:09:11.600 "block_size": 512, 00:09:11.600 "num_blocks": 65536, 00:09:11.600 "uuid": "946491f4-8a71-4b5a-a99f-a8cbd622bd3f", 00:09:11.600 "assigned_rate_limits": { 00:09:11.600 "rw_ios_per_sec": 0, 00:09:11.600 "rw_mbytes_per_sec": 0, 00:09:11.600 "r_mbytes_per_sec": 0, 00:09:11.600 "w_mbytes_per_sec": 0 00:09:11.600 }, 00:09:11.600 "claimed": true, 00:09:11.600 "claim_type": "exclusive_write", 00:09:11.600 "zoned": false, 00:09:11.600 "supported_io_types": { 00:09:11.600 "read": true, 00:09:11.600 "write": true, 00:09:11.600 "unmap": true, 00:09:11.600 "flush": true, 00:09:11.600 "reset": true, 00:09:11.600 "nvme_admin": false, 00:09:11.600 "nvme_io": false, 00:09:11.600 "nvme_io_md": false, 00:09:11.600 "write_zeroes": true, 00:09:11.600 "zcopy": true, 00:09:11.600 "get_zone_info": false, 00:09:11.600 "zone_management": false, 00:09:11.600 "zone_append": false, 00:09:11.600 "compare": false, 00:09:11.600 "compare_and_write": false, 00:09:11.600 "abort": true, 00:09:11.600 "seek_hole": false, 00:09:11.600 "seek_data": false, 00:09:11.600 "copy": true, 00:09:11.600 "nvme_iov_md": false 00:09:11.600 }, 00:09:11.600 "memory_domains": [ 00:09:11.600 { 00:09:11.600 "dma_device_id": "system", 00:09:11.600 "dma_device_type": 1 00:09:11.600 }, 00:09:11.600 { 00:09:11.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.600 "dma_device_type": 2 00:09:11.600 } 00:09:11.600 ], 00:09:11.600 "driver_specific": {} 00:09:11.600 } 00:09:11.600 ] 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:11.600 
12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.600 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.600 "name": "Existed_Raid", 00:09:11.600 "uuid": "1a6e2e29-77ac-417e-bc54-bacf63c92418", 00:09:11.600 "strip_size_kb": 0, 
00:09:11.601 "state": "configuring", 00:09:11.601 "raid_level": "raid1", 00:09:11.601 "superblock": true, 00:09:11.601 "num_base_bdevs": 3, 00:09:11.601 "num_base_bdevs_discovered": 2, 00:09:11.601 "num_base_bdevs_operational": 3, 00:09:11.601 "base_bdevs_list": [ 00:09:11.601 { 00:09:11.601 "name": "BaseBdev1", 00:09:11.601 "uuid": "946491f4-8a71-4b5a-a99f-a8cbd622bd3f", 00:09:11.601 "is_configured": true, 00:09:11.601 "data_offset": 2048, 00:09:11.601 "data_size": 63488 00:09:11.601 }, 00:09:11.601 { 00:09:11.601 "name": null, 00:09:11.601 "uuid": "ff1b8920-8aa3-4396-abb2-59045ab6a24b", 00:09:11.601 "is_configured": false, 00:09:11.601 "data_offset": 0, 00:09:11.601 "data_size": 63488 00:09:11.601 }, 00:09:11.601 { 00:09:11.601 "name": "BaseBdev3", 00:09:11.601 "uuid": "fa38447b-2805-47ae-98e4-2c82d6e4d340", 00:09:11.601 "is_configured": true, 00:09:11.601 "data_offset": 2048, 00:09:11.601 "data_size": 63488 00:09:11.601 } 00:09:11.601 ] 00:09:11.601 }' 00:09:11.601 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.601 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.171 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.171 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.171 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.171 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:12.171 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.171 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:12.171 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:12.171 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.171 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.171 [2024-11-26 12:52:29.599420] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:12.171 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.171 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:12.171 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.171 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.171 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:12.171 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:12.171 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.171 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.171 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.171 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.171 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.171 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.171 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.171 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:12.171 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.171 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.171 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.171 "name": "Existed_Raid", 00:09:12.171 "uuid": "1a6e2e29-77ac-417e-bc54-bacf63c92418", 00:09:12.171 "strip_size_kb": 0, 00:09:12.171 "state": "configuring", 00:09:12.171 "raid_level": "raid1", 00:09:12.171 "superblock": true, 00:09:12.171 "num_base_bdevs": 3, 00:09:12.171 "num_base_bdevs_discovered": 1, 00:09:12.171 "num_base_bdevs_operational": 3, 00:09:12.171 "base_bdevs_list": [ 00:09:12.171 { 00:09:12.171 "name": "BaseBdev1", 00:09:12.171 "uuid": "946491f4-8a71-4b5a-a99f-a8cbd622bd3f", 00:09:12.171 "is_configured": true, 00:09:12.171 "data_offset": 2048, 00:09:12.171 "data_size": 63488 00:09:12.171 }, 00:09:12.171 { 00:09:12.171 "name": null, 00:09:12.171 "uuid": "ff1b8920-8aa3-4396-abb2-59045ab6a24b", 00:09:12.171 "is_configured": false, 00:09:12.171 "data_offset": 0, 00:09:12.171 "data_size": 63488 00:09:12.171 }, 00:09:12.171 { 00:09:12.171 "name": null, 00:09:12.171 "uuid": "fa38447b-2805-47ae-98e4-2c82d6e4d340", 00:09:12.171 "is_configured": false, 00:09:12.171 "data_offset": 0, 00:09:12.171 "data_size": 63488 00:09:12.171 } 00:09:12.171 ] 00:09:12.171 }' 00:09:12.171 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.171 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.430 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.430 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:12.430 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
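The `jq` probes in this stretch, such as `.[0].base_bdevs_list[2].is_configured`, reduce to plain list indexing on the dumped state. A minimal Python equivalent, with the slot contents copied from the `Existed_Raid` dump just above (only `BaseBdev1` remains configured after `BaseBdev2` and `BaseBdev3` were removed):

```python
# Slots of base_bdevs_list as shown in the dump above: removed members keep
# their slot but have name null and is_configured false.
base_bdevs_list = [
    {"name": "BaseBdev1", "is_configured": True},
    {"name": None, "is_configured": False},
    {"name": None, "is_configured": False},
]

# Equivalent of: jq '.[0].base_bdevs_list[2].is_configured'  -> false
slot2_configured = base_bdevs_list[2]["is_configured"]

# num_base_bdevs_discovered is just the count of configured slots.
num_discovered = sum(1 for b in base_bdevs_list if b["is_configured"])
```

This matches `"num_base_bdevs_discovered": 1` in the dump: removing a base bdev from a configuring array frees its slot without shrinking `base_bdevs_list`.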
xtrace_disable 00:09:12.430 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.430 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.430 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:12.430 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:12.430 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.430 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.430 [2024-11-26 12:52:30.083394] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:12.430 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.430 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:12.430 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.430 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.430 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:12.430 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:12.430 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.430 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.430 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.430 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:09:12.430 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.430 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.430 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.430 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.430 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.689 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.689 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.689 "name": "Existed_Raid", 00:09:12.689 "uuid": "1a6e2e29-77ac-417e-bc54-bacf63c92418", 00:09:12.689 "strip_size_kb": 0, 00:09:12.689 "state": "configuring", 00:09:12.689 "raid_level": "raid1", 00:09:12.689 "superblock": true, 00:09:12.689 "num_base_bdevs": 3, 00:09:12.689 "num_base_bdevs_discovered": 2, 00:09:12.689 "num_base_bdevs_operational": 3, 00:09:12.689 "base_bdevs_list": [ 00:09:12.689 { 00:09:12.689 "name": "BaseBdev1", 00:09:12.689 "uuid": "946491f4-8a71-4b5a-a99f-a8cbd622bd3f", 00:09:12.689 "is_configured": true, 00:09:12.689 "data_offset": 2048, 00:09:12.689 "data_size": 63488 00:09:12.689 }, 00:09:12.689 { 00:09:12.689 "name": null, 00:09:12.689 "uuid": "ff1b8920-8aa3-4396-abb2-59045ab6a24b", 00:09:12.689 "is_configured": false, 00:09:12.689 "data_offset": 0, 00:09:12.689 "data_size": 63488 00:09:12.689 }, 00:09:12.689 { 00:09:12.689 "name": "BaseBdev3", 00:09:12.689 "uuid": "fa38447b-2805-47ae-98e4-2c82d6e4d340", 00:09:12.689 "is_configured": true, 00:09:12.689 "data_offset": 2048, 00:09:12.689 "data_size": 63488 00:09:12.689 } 00:09:12.689 ] 00:09:12.689 }' 00:09:12.689 12:52:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.689 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.949 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:12.949 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.949 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.949 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.949 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.949 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:12.949 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:12.949 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.949 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.949 [2024-11-26 12:52:30.571391] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:12.949 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.949 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:12.949 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.949 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.949 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:12.949 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:12.949 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.949 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.949 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.949 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.949 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.949 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.949 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.949 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.949 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.949 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.209 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.209 "name": "Existed_Raid", 00:09:13.209 "uuid": "1a6e2e29-77ac-417e-bc54-bacf63c92418", 00:09:13.209 "strip_size_kb": 0, 00:09:13.209 "state": "configuring", 00:09:13.209 "raid_level": "raid1", 00:09:13.209 "superblock": true, 00:09:13.209 "num_base_bdevs": 3, 00:09:13.209 "num_base_bdevs_discovered": 1, 00:09:13.209 "num_base_bdevs_operational": 3, 00:09:13.209 "base_bdevs_list": [ 00:09:13.209 { 00:09:13.209 "name": null, 00:09:13.209 "uuid": "946491f4-8a71-4b5a-a99f-a8cbd622bd3f", 00:09:13.209 "is_configured": false, 00:09:13.209 "data_offset": 0, 00:09:13.209 "data_size": 63488 00:09:13.209 }, 00:09:13.209 { 00:09:13.209 "name": null, 00:09:13.209 "uuid": 
"ff1b8920-8aa3-4396-abb2-59045ab6a24b", 00:09:13.209 "is_configured": false, 00:09:13.209 "data_offset": 0, 00:09:13.209 "data_size": 63488 00:09:13.209 }, 00:09:13.209 { 00:09:13.209 "name": "BaseBdev3", 00:09:13.209 "uuid": "fa38447b-2805-47ae-98e4-2c82d6e4d340", 00:09:13.209 "is_configured": true, 00:09:13.209 "data_offset": 2048, 00:09:13.209 "data_size": 63488 00:09:13.209 } 00:09:13.209 ] 00:09:13.209 }' 00:09:13.209 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.209 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.468 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.468 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.468 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:13.468 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.468 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.469 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:13.469 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:13.469 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.469 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.469 [2024-11-26 12:52:31.090326] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:13.469 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.469 12:52:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:13.469 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.469 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.469 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:13.469 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:13.469 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.469 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.469 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.469 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.469 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.469 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.469 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.469 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.469 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.469 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.728 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.728 "name": "Existed_Raid", 00:09:13.728 "uuid": "1a6e2e29-77ac-417e-bc54-bacf63c92418", 00:09:13.728 "strip_size_kb": 0, 00:09:13.728 "state": "configuring", 00:09:13.728 
"raid_level": "raid1", 00:09:13.728 "superblock": true, 00:09:13.728 "num_base_bdevs": 3, 00:09:13.728 "num_base_bdevs_discovered": 2, 00:09:13.728 "num_base_bdevs_operational": 3, 00:09:13.728 "base_bdevs_list": [ 00:09:13.728 { 00:09:13.728 "name": null, 00:09:13.728 "uuid": "946491f4-8a71-4b5a-a99f-a8cbd622bd3f", 00:09:13.728 "is_configured": false, 00:09:13.728 "data_offset": 0, 00:09:13.728 "data_size": 63488 00:09:13.728 }, 00:09:13.728 { 00:09:13.728 "name": "BaseBdev2", 00:09:13.728 "uuid": "ff1b8920-8aa3-4396-abb2-59045ab6a24b", 00:09:13.728 "is_configured": true, 00:09:13.728 "data_offset": 2048, 00:09:13.728 "data_size": 63488 00:09:13.728 }, 00:09:13.728 { 00:09:13.728 "name": "BaseBdev3", 00:09:13.728 "uuid": "fa38447b-2805-47ae-98e4-2c82d6e4d340", 00:09:13.728 "is_configured": true, 00:09:13.728 "data_offset": 2048, 00:09:13.728 "data_size": 63488 00:09:13.728 } 00:09:13.728 ] 00:09:13.728 }' 00:09:13.728 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.728 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.987 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.987 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.987 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.987 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:13.987 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.987 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:13.987 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:13.987 12:52:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.987 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.987 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.987 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.987 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 946491f4-8a71-4b5a-a99f-a8cbd622bd3f 00:09:13.987 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.987 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.987 [2024-11-26 12:52:31.650110] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:13.987 [2024-11-26 12:52:31.650352] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:13.987 [2024-11-26 12:52:31.650372] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:13.988 [2024-11-26 12:52:31.650663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:13.988 [2024-11-26 12:52:31.650823] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:13.988 [2024-11-26 12:52:31.650838] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:13.988 NewBaseBdev 00:09:13.988 [2024-11-26 12:52:31.650945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:13.988 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.988 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:13.988 
12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:13.988 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:13.988 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:13.988 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:13.988 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:13.988 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:13.988 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.988 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.988 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.988 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:13.988 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.988 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.247 [ 00:09:14.247 { 00:09:14.247 "name": "NewBaseBdev", 00:09:14.247 "aliases": [ 00:09:14.247 "946491f4-8a71-4b5a-a99f-a8cbd622bd3f" 00:09:14.247 ], 00:09:14.247 "product_name": "Malloc disk", 00:09:14.247 "block_size": 512, 00:09:14.247 "num_blocks": 65536, 00:09:14.247 "uuid": "946491f4-8a71-4b5a-a99f-a8cbd622bd3f", 00:09:14.247 "assigned_rate_limits": { 00:09:14.247 "rw_ios_per_sec": 0, 00:09:14.247 "rw_mbytes_per_sec": 0, 00:09:14.247 "r_mbytes_per_sec": 0, 00:09:14.247 "w_mbytes_per_sec": 0 00:09:14.247 }, 00:09:14.247 "claimed": true, 00:09:14.247 "claim_type": "exclusive_write", 00:09:14.247 
"zoned": false, 00:09:14.247 "supported_io_types": { 00:09:14.247 "read": true, 00:09:14.247 "write": true, 00:09:14.247 "unmap": true, 00:09:14.247 "flush": true, 00:09:14.247 "reset": true, 00:09:14.247 "nvme_admin": false, 00:09:14.247 "nvme_io": false, 00:09:14.247 "nvme_io_md": false, 00:09:14.247 "write_zeroes": true, 00:09:14.247 "zcopy": true, 00:09:14.247 "get_zone_info": false, 00:09:14.247 "zone_management": false, 00:09:14.247 "zone_append": false, 00:09:14.247 "compare": false, 00:09:14.247 "compare_and_write": false, 00:09:14.247 "abort": true, 00:09:14.247 "seek_hole": false, 00:09:14.247 "seek_data": false, 00:09:14.247 "copy": true, 00:09:14.247 "nvme_iov_md": false 00:09:14.248 }, 00:09:14.248 "memory_domains": [ 00:09:14.248 { 00:09:14.248 "dma_device_id": "system", 00:09:14.248 "dma_device_type": 1 00:09:14.248 }, 00:09:14.248 { 00:09:14.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.248 "dma_device_type": 2 00:09:14.248 } 00:09:14.248 ], 00:09:14.248 "driver_specific": {} 00:09:14.248 } 00:09:14.248 ] 00:09:14.248 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.248 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:14.248 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:14.248 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.248 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.248 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.248 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:14.248 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:14.248 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.248 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.248 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.248 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.248 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.248 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.248 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.248 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.248 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.248 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.248 "name": "Existed_Raid", 00:09:14.248 "uuid": "1a6e2e29-77ac-417e-bc54-bacf63c92418", 00:09:14.248 "strip_size_kb": 0, 00:09:14.248 "state": "online", 00:09:14.248 "raid_level": "raid1", 00:09:14.248 "superblock": true, 00:09:14.248 "num_base_bdevs": 3, 00:09:14.248 "num_base_bdevs_discovered": 3, 00:09:14.248 "num_base_bdevs_operational": 3, 00:09:14.248 "base_bdevs_list": [ 00:09:14.248 { 00:09:14.248 "name": "NewBaseBdev", 00:09:14.248 "uuid": "946491f4-8a71-4b5a-a99f-a8cbd622bd3f", 00:09:14.248 "is_configured": true, 00:09:14.248 "data_offset": 2048, 00:09:14.248 "data_size": 63488 00:09:14.248 }, 00:09:14.248 { 00:09:14.248 "name": "BaseBdev2", 00:09:14.248 "uuid": "ff1b8920-8aa3-4396-abb2-59045ab6a24b", 00:09:14.248 "is_configured": true, 00:09:14.248 "data_offset": 2048, 00:09:14.248 "data_size": 63488 00:09:14.248 }, 00:09:14.248 
{ 00:09:14.248 "name": "BaseBdev3", 00:09:14.248 "uuid": "fa38447b-2805-47ae-98e4-2c82d6e4d340", 00:09:14.248 "is_configured": true, 00:09:14.248 "data_offset": 2048, 00:09:14.248 "data_size": 63488 00:09:14.248 } 00:09:14.248 ] 00:09:14.248 }' 00:09:14.248 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.248 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.508 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:14.508 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:14.508 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:14.508 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:14.508 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:14.508 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:14.508 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:14.508 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:14.508 12:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.508 12:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.508 [2024-11-26 12:52:32.137650] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:14.508 12:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.508 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:14.508 "name": "Existed_Raid", 00:09:14.508 
"aliases": [ 00:09:14.508 "1a6e2e29-77ac-417e-bc54-bacf63c92418" 00:09:14.508 ], 00:09:14.508 "product_name": "Raid Volume", 00:09:14.508 "block_size": 512, 00:09:14.508 "num_blocks": 63488, 00:09:14.508 "uuid": "1a6e2e29-77ac-417e-bc54-bacf63c92418", 00:09:14.508 "assigned_rate_limits": { 00:09:14.508 "rw_ios_per_sec": 0, 00:09:14.508 "rw_mbytes_per_sec": 0, 00:09:14.508 "r_mbytes_per_sec": 0, 00:09:14.508 "w_mbytes_per_sec": 0 00:09:14.508 }, 00:09:14.508 "claimed": false, 00:09:14.508 "zoned": false, 00:09:14.508 "supported_io_types": { 00:09:14.508 "read": true, 00:09:14.508 "write": true, 00:09:14.508 "unmap": false, 00:09:14.508 "flush": false, 00:09:14.508 "reset": true, 00:09:14.508 "nvme_admin": false, 00:09:14.508 "nvme_io": false, 00:09:14.508 "nvme_io_md": false, 00:09:14.508 "write_zeroes": true, 00:09:14.508 "zcopy": false, 00:09:14.508 "get_zone_info": false, 00:09:14.508 "zone_management": false, 00:09:14.508 "zone_append": false, 00:09:14.508 "compare": false, 00:09:14.508 "compare_and_write": false, 00:09:14.508 "abort": false, 00:09:14.508 "seek_hole": false, 00:09:14.508 "seek_data": false, 00:09:14.508 "copy": false, 00:09:14.508 "nvme_iov_md": false 00:09:14.508 }, 00:09:14.508 "memory_domains": [ 00:09:14.508 { 00:09:14.508 "dma_device_id": "system", 00:09:14.508 "dma_device_type": 1 00:09:14.508 }, 00:09:14.508 { 00:09:14.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.508 "dma_device_type": 2 00:09:14.508 }, 00:09:14.508 { 00:09:14.508 "dma_device_id": "system", 00:09:14.508 "dma_device_type": 1 00:09:14.508 }, 00:09:14.508 { 00:09:14.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.508 "dma_device_type": 2 00:09:14.508 }, 00:09:14.508 { 00:09:14.508 "dma_device_id": "system", 00:09:14.508 "dma_device_type": 1 00:09:14.508 }, 00:09:14.508 { 00:09:14.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.508 "dma_device_type": 2 00:09:14.508 } 00:09:14.508 ], 00:09:14.508 "driver_specific": { 00:09:14.508 "raid": { 00:09:14.508 
"uuid": "1a6e2e29-77ac-417e-bc54-bacf63c92418", 00:09:14.508 "strip_size_kb": 0, 00:09:14.508 "state": "online", 00:09:14.508 "raid_level": "raid1", 00:09:14.508 "superblock": true, 00:09:14.508 "num_base_bdevs": 3, 00:09:14.508 "num_base_bdevs_discovered": 3, 00:09:14.508 "num_base_bdevs_operational": 3, 00:09:14.508 "base_bdevs_list": [ 00:09:14.508 { 00:09:14.508 "name": "NewBaseBdev", 00:09:14.508 "uuid": "946491f4-8a71-4b5a-a99f-a8cbd622bd3f", 00:09:14.508 "is_configured": true, 00:09:14.508 "data_offset": 2048, 00:09:14.508 "data_size": 63488 00:09:14.508 }, 00:09:14.508 { 00:09:14.508 "name": "BaseBdev2", 00:09:14.508 "uuid": "ff1b8920-8aa3-4396-abb2-59045ab6a24b", 00:09:14.508 "is_configured": true, 00:09:14.508 "data_offset": 2048, 00:09:14.508 "data_size": 63488 00:09:14.508 }, 00:09:14.508 { 00:09:14.508 "name": "BaseBdev3", 00:09:14.508 "uuid": "fa38447b-2805-47ae-98e4-2c82d6e4d340", 00:09:14.508 "is_configured": true, 00:09:14.508 "data_offset": 2048, 00:09:14.508 "data_size": 63488 00:09:14.508 } 00:09:14.508 ] 00:09:14.508 } 00:09:14.508 } 00:09:14.508 }' 00:09:14.508 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:14.767 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:14.768 BaseBdev2 00:09:14.768 BaseBdev3' 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:14.768 12:52:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.768 [2024-11-26 12:52:32.396908] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.768 [2024-11-26 12:52:32.397021] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:14.768 [2024-11-26 12:52:32.397143] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:14.768 [2024-11-26 12:52:32.397474] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:14.768 [2024-11-26 12:52:32.397530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 79284 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 79284 ']' 
00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 79284 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79284 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:14.768 killing process with pid 79284 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79284' 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 79284 00:09:14.768 [2024-11-26 12:52:32.440784] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:14.768 12:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 79284 00:09:15.027 [2024-11-26 12:52:32.499902] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:15.288 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:15.288 00:09:15.288 real 0m8.920s 00:09:15.288 user 0m14.941s 00:09:15.288 sys 0m1.926s 00:09:15.288 ************************************ 00:09:15.288 END TEST raid_state_function_test_sb 00:09:15.288 ************************************ 00:09:15.288 12:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:15.288 12:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.288 12:52:32 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 
00:09:15.288 12:52:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:15.288 12:52:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:15.288 12:52:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:15.288 ************************************ 00:09:15.288 START TEST raid_superblock_test 00:09:15.288 ************************************ 00:09:15.288 12:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:09:15.288 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:15.288 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:15.288 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:15.288 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:15.288 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:15.288 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:15.288 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:15.288 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:15.288 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:15.288 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:15.288 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:15.288 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:15.288 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:15.288 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 
00:09:15.288 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:15.288 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79888 00:09:15.288 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:15.288 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79888 00:09:15.288 12:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 79888 ']' 00:09:15.288 12:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.288 12:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:15.288 12:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.288 12:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:15.288 12:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.288 [2024-11-26 12:52:32.958537] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:15.288 [2024-11-26 12:52:32.958748] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79888 ] 00:09:15.547 [2024-11-26 12:52:33.123904] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.547 [2024-11-26 12:52:33.168154] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.547 [2024-11-26 12:52:33.210398] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:15.547 [2024-11-26 12:52:33.210510] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.142 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:16.142 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:16.142 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:16.142 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:16.142 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:16.142 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:16.142 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:16.142 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:16.142 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:16.142 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:16.142 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:16.142 
12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.142 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.142 malloc1 00:09:16.142 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.142 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:16.142 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.142 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.402 [2024-11-26 12:52:33.821091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:16.402 [2024-11-26 12:52:33.821235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.402 [2024-11-26 12:52:33.821282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:16.402 [2024-11-26 12:52:33.821326] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.402 [2024-11-26 12:52:33.823408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.402 [2024-11-26 12:52:33.823484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:16.402 pt1 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.403 malloc2 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.403 [2024-11-26 12:52:33.867157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:16.403 [2024-11-26 12:52:33.867392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.403 [2024-11-26 12:52:33.867440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:16.403 [2024-11-26 12:52:33.867466] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.403 [2024-11-26 12:52:33.872254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.403 [2024-11-26 12:52:33.872406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:16.403 
pt2 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.403 malloc3 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.403 [2024-11-26 12:52:33.901929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:16.403 [2024-11-26 12:52:33.902010] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.403 [2024-11-26 12:52:33.902060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:16.403 [2024-11-26 12:52:33.902088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.403 [2024-11-26 12:52:33.904108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.403 [2024-11-26 12:52:33.904200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:16.403 pt3 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.403 [2024-11-26 12:52:33.913970] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:16.403 [2024-11-26 12:52:33.915790] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:16.403 [2024-11-26 12:52:33.915892] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:16.403 [2024-11-26 12:52:33.916074] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:16.403 [2024-11-26 12:52:33.916122] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:16.403 [2024-11-26 12:52:33.916395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:16.403 
[2024-11-26 12:52:33.916572] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:16.403 [2024-11-26 12:52:33.916620] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:09:16.403 [2024-11-26 12:52:33.916767] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.403 "name": "raid_bdev1", 00:09:16.403 "uuid": "4bc3eae2-6b24-4cf2-8bf3-ab132e34de93", 00:09:16.403 "strip_size_kb": 0, 00:09:16.403 "state": "online", 00:09:16.403 "raid_level": "raid1", 00:09:16.403 "superblock": true, 00:09:16.403 "num_base_bdevs": 3, 00:09:16.403 "num_base_bdevs_discovered": 3, 00:09:16.403 "num_base_bdevs_operational": 3, 00:09:16.403 "base_bdevs_list": [ 00:09:16.403 { 00:09:16.403 "name": "pt1", 00:09:16.403 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:16.403 "is_configured": true, 00:09:16.403 "data_offset": 2048, 00:09:16.403 "data_size": 63488 00:09:16.403 }, 00:09:16.403 { 00:09:16.403 "name": "pt2", 00:09:16.403 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:16.403 "is_configured": true, 00:09:16.403 "data_offset": 2048, 00:09:16.403 "data_size": 63488 00:09:16.403 }, 00:09:16.403 { 00:09:16.403 "name": "pt3", 00:09:16.403 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:16.403 "is_configured": true, 00:09:16.403 "data_offset": 2048, 00:09:16.403 "data_size": 63488 00:09:16.403 } 00:09:16.403 ] 00:09:16.403 }' 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.403 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.974 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:16.974 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:16.974 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:16.974 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:16.974 12:52:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.974 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:16.974 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:16.974 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.974 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.974 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.974 [2024-11-26 12:52:34.357523] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.974 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.974 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.974 "name": "raid_bdev1", 00:09:16.974 "aliases": [ 00:09:16.974 "4bc3eae2-6b24-4cf2-8bf3-ab132e34de93" 00:09:16.974 ], 00:09:16.974 "product_name": "Raid Volume", 00:09:16.974 "block_size": 512, 00:09:16.974 "num_blocks": 63488, 00:09:16.974 "uuid": "4bc3eae2-6b24-4cf2-8bf3-ab132e34de93", 00:09:16.974 "assigned_rate_limits": { 00:09:16.974 "rw_ios_per_sec": 0, 00:09:16.974 "rw_mbytes_per_sec": 0, 00:09:16.974 "r_mbytes_per_sec": 0, 00:09:16.974 "w_mbytes_per_sec": 0 00:09:16.974 }, 00:09:16.974 "claimed": false, 00:09:16.974 "zoned": false, 00:09:16.974 "supported_io_types": { 00:09:16.974 "read": true, 00:09:16.974 "write": true, 00:09:16.974 "unmap": false, 00:09:16.974 "flush": false, 00:09:16.974 "reset": true, 00:09:16.974 "nvme_admin": false, 00:09:16.974 "nvme_io": false, 00:09:16.974 "nvme_io_md": false, 00:09:16.974 "write_zeroes": true, 00:09:16.974 "zcopy": false, 00:09:16.974 "get_zone_info": false, 00:09:16.974 "zone_management": false, 00:09:16.974 "zone_append": false, 00:09:16.974 "compare": false, 00:09:16.974 
"compare_and_write": false, 00:09:16.974 "abort": false, 00:09:16.974 "seek_hole": false, 00:09:16.974 "seek_data": false, 00:09:16.974 "copy": false, 00:09:16.974 "nvme_iov_md": false 00:09:16.974 }, 00:09:16.974 "memory_domains": [ 00:09:16.974 { 00:09:16.974 "dma_device_id": "system", 00:09:16.974 "dma_device_type": 1 00:09:16.974 }, 00:09:16.974 { 00:09:16.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.974 "dma_device_type": 2 00:09:16.974 }, 00:09:16.974 { 00:09:16.974 "dma_device_id": "system", 00:09:16.974 "dma_device_type": 1 00:09:16.974 }, 00:09:16.974 { 00:09:16.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.974 "dma_device_type": 2 00:09:16.974 }, 00:09:16.974 { 00:09:16.974 "dma_device_id": "system", 00:09:16.974 "dma_device_type": 1 00:09:16.974 }, 00:09:16.974 { 00:09:16.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.974 "dma_device_type": 2 00:09:16.974 } 00:09:16.974 ], 00:09:16.974 "driver_specific": { 00:09:16.974 "raid": { 00:09:16.974 "uuid": "4bc3eae2-6b24-4cf2-8bf3-ab132e34de93", 00:09:16.974 "strip_size_kb": 0, 00:09:16.974 "state": "online", 00:09:16.974 "raid_level": "raid1", 00:09:16.974 "superblock": true, 00:09:16.974 "num_base_bdevs": 3, 00:09:16.974 "num_base_bdevs_discovered": 3, 00:09:16.974 "num_base_bdevs_operational": 3, 00:09:16.974 "base_bdevs_list": [ 00:09:16.974 { 00:09:16.974 "name": "pt1", 00:09:16.974 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:16.974 "is_configured": true, 00:09:16.974 "data_offset": 2048, 00:09:16.974 "data_size": 63488 00:09:16.974 }, 00:09:16.974 { 00:09:16.974 "name": "pt2", 00:09:16.974 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:16.974 "is_configured": true, 00:09:16.974 "data_offset": 2048, 00:09:16.974 "data_size": 63488 00:09:16.974 }, 00:09:16.974 { 00:09:16.974 "name": "pt3", 00:09:16.974 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:16.974 "is_configured": true, 00:09:16.974 "data_offset": 2048, 00:09:16.974 "data_size": 63488 00:09:16.974 } 
00:09:16.974 ] 00:09:16.974 } 00:09:16.974 } 00:09:16.974 }' 00:09:16.974 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.974 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:16.974 pt2 00:09:16.974 pt3' 00:09:16.974 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.974 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.974 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.974 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.974 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:16.974 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.974 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.974 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.974 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.974 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.974 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.974 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:16.974 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.974 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.975 12:52:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.975 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.975 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.975 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.975 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.975 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:16.975 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.975 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.975 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.975 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.975 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.975 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.975 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:16.975 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:16.975 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.975 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.975 [2024-11-26 12:52:34.593044] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.975 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:09:16.975 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4bc3eae2-6b24-4cf2-8bf3-ab132e34de93 00:09:16.975 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4bc3eae2-6b24-4cf2-8bf3-ab132e34de93 ']' 00:09:16.975 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:16.975 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.975 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.975 [2024-11-26 12:52:34.640708] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:16.975 [2024-11-26 12:52:34.640766] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.975 [2024-11-26 12:52:34.640897] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.975 [2024-11-26 12:52:34.640983] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:16.975 [2024-11-26 12:52:34.641051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:09:16.975 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.235 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.235 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:17.235 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.235 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.235 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.235 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:17.235 
12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:17.235 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:17.235 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:17.235 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.235 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.235 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.235 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:17.235 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:17.235 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.235 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.235 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.235 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:17.235 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:17.235 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.235 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.235 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.235 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:17.235 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:17.235 12:52:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.235 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.235 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.235 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:17.235 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:17.235 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:17.235 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.236 [2024-11-26 12:52:34.772526] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:17.236 [2024-11-26 12:52:34.774328] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:17.236 request: 00:09:17.236 [2024-11-26 12:52:34.774419] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:17.236 [2024-11-26 12:52:34.774474] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:17.236 [2024-11-26 12:52:34.774513] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:17.236 [2024-11-26 12:52:34.774531] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:17.236 [2024-11-26 12:52:34.774543] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:17.236 [2024-11-26 12:52:34.774561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:09:17.236 { 00:09:17.236 "name": "raid_bdev1", 00:09:17.236 "raid_level": "raid1", 00:09:17.236 "base_bdevs": [ 00:09:17.236 "malloc1", 00:09:17.236 "malloc2", 00:09:17.236 "malloc3" 00:09:17.236 ], 00:09:17.236 "superblock": false, 00:09:17.236 "method": "bdev_raid_create", 00:09:17.236 "req_id": 1 00:09:17.236 } 00:09:17.236 Got JSON-RPC error response 00:09:17.236 response: 00:09:17.236 { 00:09:17.236 "code": -17, 00:09:17.236 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:17.236 } 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:17.236 
12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.236 [2024-11-26 12:52:34.832400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:17.236 [2024-11-26 12:52:34.832460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.236 [2024-11-26 12:52:34.832479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:17.236 [2024-11-26 12:52:34.832489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.236 [2024-11-26 12:52:34.834520] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.236 [2024-11-26 12:52:34.834557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:17.236 [2024-11-26 12:52:34.834617] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:17.236 [2024-11-26 12:52:34.834647] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:17.236 pt1 00:09:17.236 12:52:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.236 "name": "raid_bdev1", 00:09:17.236 "uuid": "4bc3eae2-6b24-4cf2-8bf3-ab132e34de93", 00:09:17.236 "strip_size_kb": 0, 00:09:17.236 "state": 
"configuring", 00:09:17.236 "raid_level": "raid1", 00:09:17.236 "superblock": true, 00:09:17.236 "num_base_bdevs": 3, 00:09:17.236 "num_base_bdevs_discovered": 1, 00:09:17.236 "num_base_bdevs_operational": 3, 00:09:17.236 "base_bdevs_list": [ 00:09:17.236 { 00:09:17.236 "name": "pt1", 00:09:17.236 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:17.236 "is_configured": true, 00:09:17.236 "data_offset": 2048, 00:09:17.236 "data_size": 63488 00:09:17.236 }, 00:09:17.236 { 00:09:17.236 "name": null, 00:09:17.236 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:17.236 "is_configured": false, 00:09:17.236 "data_offset": 2048, 00:09:17.236 "data_size": 63488 00:09:17.236 }, 00:09:17.236 { 00:09:17.236 "name": null, 00:09:17.236 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:17.236 "is_configured": false, 00:09:17.236 "data_offset": 2048, 00:09:17.236 "data_size": 63488 00:09:17.236 } 00:09:17.236 ] 00:09:17.236 }' 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.236 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.806 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:17.806 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:17.806 12:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.806 12:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.806 [2024-11-26 12:52:35.219744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:17.806 [2024-11-26 12:52:35.219838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.806 [2024-11-26 12:52:35.219873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:17.807 
[2024-11-26 12:52:35.219927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.807 [2024-11-26 12:52:35.220309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.807 [2024-11-26 12:52:35.220367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:17.807 [2024-11-26 12:52:35.220468] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:17.807 [2024-11-26 12:52:35.220517] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:17.807 pt2 00:09:17.807 12:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.807 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:17.807 12:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.807 12:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.807 [2024-11-26 12:52:35.231732] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:17.807 12:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.807 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:17.807 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.807 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.807 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:17.807 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:17.807 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.807 12:52:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.807 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.807 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.807 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.807 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.807 12:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.807 12:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.807 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.807 12:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.807 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.807 "name": "raid_bdev1", 00:09:17.807 "uuid": "4bc3eae2-6b24-4cf2-8bf3-ab132e34de93", 00:09:17.807 "strip_size_kb": 0, 00:09:17.807 "state": "configuring", 00:09:17.807 "raid_level": "raid1", 00:09:17.807 "superblock": true, 00:09:17.807 "num_base_bdevs": 3, 00:09:17.807 "num_base_bdevs_discovered": 1, 00:09:17.807 "num_base_bdevs_operational": 3, 00:09:17.807 "base_bdevs_list": [ 00:09:17.807 { 00:09:17.807 "name": "pt1", 00:09:17.807 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:17.807 "is_configured": true, 00:09:17.807 "data_offset": 2048, 00:09:17.807 "data_size": 63488 00:09:17.807 }, 00:09:17.807 { 00:09:17.807 "name": null, 00:09:17.807 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:17.807 "is_configured": false, 00:09:17.807 "data_offset": 0, 00:09:17.807 "data_size": 63488 00:09:17.807 }, 00:09:17.807 { 00:09:17.807 "name": null, 00:09:17.807 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:17.807 "is_configured": false, 00:09:17.807 
"data_offset": 2048, 00:09:17.807 "data_size": 63488 00:09:17.807 } 00:09:17.807 ] 00:09:17.807 }' 00:09:17.807 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.807 12:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.067 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:18.067 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:18.067 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:18.067 12:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.067 12:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.067 [2024-11-26 12:52:35.615074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:18.067 [2024-11-26 12:52:35.615164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.067 [2024-11-26 12:52:35.615200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:18.067 [2024-11-26 12:52:35.615210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.067 [2024-11-26 12:52:35.615571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.067 [2024-11-26 12:52:35.615587] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:18.067 [2024-11-26 12:52:35.615648] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:18.067 [2024-11-26 12:52:35.615672] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:18.067 pt2 00:09:18.067 12:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.067 12:52:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:18.067 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:18.067 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:18.068 12:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.068 12:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.068 [2024-11-26 12:52:35.627038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:18.068 [2024-11-26 12:52:35.627079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.068 [2024-11-26 12:52:35.627096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:18.068 [2024-11-26 12:52:35.627104] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.068 [2024-11-26 12:52:35.627422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.068 [2024-11-26 12:52:35.627438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:18.068 [2024-11-26 12:52:35.627490] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:18.068 [2024-11-26 12:52:35.627506] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:18.068 [2024-11-26 12:52:35.627590] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:18.068 [2024-11-26 12:52:35.627598] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:18.068 [2024-11-26 12:52:35.627800] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:18.068 [2024-11-26 12:52:35.627911] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000006980 00:09:18.068 [2024-11-26 12:52:35.627923] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:18.068 [2024-11-26 12:52:35.628013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.068 pt3 00:09:18.068 12:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.068 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:18.068 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:18.068 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:18.068 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.068 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.068 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:18.068 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:18.068 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.068 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.068 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.068 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.068 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.068 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.068 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.068 12:52:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.068 12:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.068 12:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.068 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.068 "name": "raid_bdev1", 00:09:18.068 "uuid": "4bc3eae2-6b24-4cf2-8bf3-ab132e34de93", 00:09:18.068 "strip_size_kb": 0, 00:09:18.068 "state": "online", 00:09:18.068 "raid_level": "raid1", 00:09:18.068 "superblock": true, 00:09:18.068 "num_base_bdevs": 3, 00:09:18.068 "num_base_bdevs_discovered": 3, 00:09:18.068 "num_base_bdevs_operational": 3, 00:09:18.068 "base_bdevs_list": [ 00:09:18.068 { 00:09:18.068 "name": "pt1", 00:09:18.068 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.068 "is_configured": true, 00:09:18.068 "data_offset": 2048, 00:09:18.068 "data_size": 63488 00:09:18.068 }, 00:09:18.068 { 00:09:18.068 "name": "pt2", 00:09:18.068 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.068 "is_configured": true, 00:09:18.068 "data_offset": 2048, 00:09:18.068 "data_size": 63488 00:09:18.068 }, 00:09:18.068 { 00:09:18.068 "name": "pt3", 00:09:18.068 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:18.068 "is_configured": true, 00:09:18.068 "data_offset": 2048, 00:09:18.068 "data_size": 63488 00:09:18.068 } 00:09:18.068 ] 00:09:18.068 }' 00:09:18.068 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.068 12:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.638 [2024-11-26 12:52:36.062546] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:18.638 "name": "raid_bdev1", 00:09:18.638 "aliases": [ 00:09:18.638 "4bc3eae2-6b24-4cf2-8bf3-ab132e34de93" 00:09:18.638 ], 00:09:18.638 "product_name": "Raid Volume", 00:09:18.638 "block_size": 512, 00:09:18.638 "num_blocks": 63488, 00:09:18.638 "uuid": "4bc3eae2-6b24-4cf2-8bf3-ab132e34de93", 00:09:18.638 "assigned_rate_limits": { 00:09:18.638 "rw_ios_per_sec": 0, 00:09:18.638 "rw_mbytes_per_sec": 0, 00:09:18.638 "r_mbytes_per_sec": 0, 00:09:18.638 "w_mbytes_per_sec": 0 00:09:18.638 }, 00:09:18.638 "claimed": false, 00:09:18.638 "zoned": false, 00:09:18.638 "supported_io_types": { 00:09:18.638 "read": true, 00:09:18.638 "write": true, 00:09:18.638 "unmap": false, 00:09:18.638 "flush": false, 00:09:18.638 "reset": true, 00:09:18.638 "nvme_admin": false, 00:09:18.638 "nvme_io": false, 00:09:18.638 "nvme_io_md": false, 00:09:18.638 "write_zeroes": true, 00:09:18.638 "zcopy": false, 00:09:18.638 "get_zone_info": 
false, 00:09:18.638 "zone_management": false, 00:09:18.638 "zone_append": false, 00:09:18.638 "compare": false, 00:09:18.638 "compare_and_write": false, 00:09:18.638 "abort": false, 00:09:18.638 "seek_hole": false, 00:09:18.638 "seek_data": false, 00:09:18.638 "copy": false, 00:09:18.638 "nvme_iov_md": false 00:09:18.638 }, 00:09:18.638 "memory_domains": [ 00:09:18.638 { 00:09:18.638 "dma_device_id": "system", 00:09:18.638 "dma_device_type": 1 00:09:18.638 }, 00:09:18.638 { 00:09:18.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.638 "dma_device_type": 2 00:09:18.638 }, 00:09:18.638 { 00:09:18.638 "dma_device_id": "system", 00:09:18.638 "dma_device_type": 1 00:09:18.638 }, 00:09:18.638 { 00:09:18.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.638 "dma_device_type": 2 00:09:18.638 }, 00:09:18.638 { 00:09:18.638 "dma_device_id": "system", 00:09:18.638 "dma_device_type": 1 00:09:18.638 }, 00:09:18.638 { 00:09:18.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.638 "dma_device_type": 2 00:09:18.638 } 00:09:18.638 ], 00:09:18.638 "driver_specific": { 00:09:18.638 "raid": { 00:09:18.638 "uuid": "4bc3eae2-6b24-4cf2-8bf3-ab132e34de93", 00:09:18.638 "strip_size_kb": 0, 00:09:18.638 "state": "online", 00:09:18.638 "raid_level": "raid1", 00:09:18.638 "superblock": true, 00:09:18.638 "num_base_bdevs": 3, 00:09:18.638 "num_base_bdevs_discovered": 3, 00:09:18.638 "num_base_bdevs_operational": 3, 00:09:18.638 "base_bdevs_list": [ 00:09:18.638 { 00:09:18.638 "name": "pt1", 00:09:18.638 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.638 "is_configured": true, 00:09:18.638 "data_offset": 2048, 00:09:18.638 "data_size": 63488 00:09:18.638 }, 00:09:18.638 { 00:09:18.638 "name": "pt2", 00:09:18.638 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.638 "is_configured": true, 00:09:18.638 "data_offset": 2048, 00:09:18.638 "data_size": 63488 00:09:18.638 }, 00:09:18.638 { 00:09:18.638 "name": "pt3", 00:09:18.638 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:18.638 "is_configured": true, 00:09:18.638 "data_offset": 2048, 00:09:18.638 "data_size": 63488 00:09:18.638 } 00:09:18.638 ] 00:09:18.638 } 00:09:18.638 } 00:09:18.638 }' 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:18.638 pt2 00:09:18.638 pt3' 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.638 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:18.899 [2024-11-26 12:52:36.330081] 
bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4bc3eae2-6b24-4cf2-8bf3-ab132e34de93 '!=' 4bc3eae2-6b24-4cf2-8bf3-ab132e34de93 ']' 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.899 [2024-11-26 12:52:36.377790] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.899 12:52:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.899 "name": "raid_bdev1", 00:09:18.899 "uuid": "4bc3eae2-6b24-4cf2-8bf3-ab132e34de93", 00:09:18.899 "strip_size_kb": 0, 00:09:18.899 "state": "online", 00:09:18.899 "raid_level": "raid1", 00:09:18.899 "superblock": true, 00:09:18.899 "num_base_bdevs": 3, 00:09:18.899 "num_base_bdevs_discovered": 2, 00:09:18.899 "num_base_bdevs_operational": 2, 00:09:18.899 "base_bdevs_list": [ 00:09:18.899 { 00:09:18.899 "name": null, 00:09:18.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.899 "is_configured": false, 00:09:18.899 "data_offset": 0, 00:09:18.899 "data_size": 63488 00:09:18.899 }, 00:09:18.899 { 00:09:18.899 "name": "pt2", 00:09:18.899 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.899 "is_configured": true, 00:09:18.899 "data_offset": 2048, 00:09:18.899 "data_size": 63488 00:09:18.899 }, 00:09:18.899 { 00:09:18.899 "name": "pt3", 00:09:18.899 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:18.899 "is_configured": true, 00:09:18.899 "data_offset": 2048, 00:09:18.899 "data_size": 63488 00:09:18.899 } 
00:09:18.899 ] 00:09:18.899 }' 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.899 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.470 [2024-11-26 12:52:36.868905] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:19.470 [2024-11-26 12:52:36.868932] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:19.470 [2024-11-26 12:52:36.868986] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.470 [2024-11-26 12:52:36.869039] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:19.470 [2024-11-26 12:52:36.869048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.470 12:52:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.470 [2024-11-26 12:52:36.940778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:19.470 [2024-11-26 12:52:36.940823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.470 [2024-11-26 12:52:36.940840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:19.470 [2024-11-26 12:52:36.940849] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.470 [2024-11-26 12:52:36.942921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.470 [2024-11-26 12:52:36.942955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:19.470 [2024-11-26 12:52:36.943017] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:19.470 [2024-11-26 12:52:36.943045] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:19.470 pt2 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.470 12:52:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.470 "name": "raid_bdev1", 00:09:19.470 "uuid": "4bc3eae2-6b24-4cf2-8bf3-ab132e34de93", 00:09:19.470 "strip_size_kb": 0, 00:09:19.470 "state": "configuring", 00:09:19.470 "raid_level": "raid1", 00:09:19.470 "superblock": true, 00:09:19.470 "num_base_bdevs": 3, 00:09:19.470 "num_base_bdevs_discovered": 1, 00:09:19.470 "num_base_bdevs_operational": 2, 00:09:19.470 "base_bdevs_list": [ 00:09:19.470 { 00:09:19.470 "name": null, 00:09:19.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.470 "is_configured": false, 00:09:19.470 "data_offset": 2048, 00:09:19.470 "data_size": 63488 00:09:19.470 }, 00:09:19.470 { 00:09:19.470 "name": "pt2", 00:09:19.470 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.470 "is_configured": true, 00:09:19.470 "data_offset": 2048, 00:09:19.470 "data_size": 63488 00:09:19.470 }, 00:09:19.470 { 00:09:19.470 "name": null, 00:09:19.470 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:19.470 "is_configured": false, 00:09:19.470 "data_offset": 2048, 00:09:19.470 "data_size": 63488 00:09:19.470 } 
00:09:19.470 ] 00:09:19.470 }' 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.470 12:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.731 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:19.731 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:19.731 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:19.731 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:19.731 12:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.731 12:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.731 [2024-11-26 12:52:37.348143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:19.731 [2024-11-26 12:52:37.348253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.731 [2024-11-26 12:52:37.348281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:19.731 [2024-11-26 12:52:37.348290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.731 [2024-11-26 12:52:37.348682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.731 [2024-11-26 12:52:37.348698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:19.731 [2024-11-26 12:52:37.348763] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:19.731 [2024-11-26 12:52:37.348783] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:19.731 [2024-11-26 12:52:37.348866] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 
00:09:19.731 [2024-11-26 12:52:37.348874] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:19.731 [2024-11-26 12:52:37.349107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:19.731 [2024-11-26 12:52:37.349235] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:19.731 [2024-11-26 12:52:37.349246] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:19.731 [2024-11-26 12:52:37.349346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.731 pt3 00:09:19.731 12:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.731 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:19.731 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.731 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.731 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.731 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.731 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:19.731 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.731 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.731 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.731 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.731 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.731 
12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.731 12:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.731 12:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.731 12:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.731 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.731 "name": "raid_bdev1", 00:09:19.731 "uuid": "4bc3eae2-6b24-4cf2-8bf3-ab132e34de93", 00:09:19.731 "strip_size_kb": 0, 00:09:19.731 "state": "online", 00:09:19.731 "raid_level": "raid1", 00:09:19.731 "superblock": true, 00:09:19.731 "num_base_bdevs": 3, 00:09:19.731 "num_base_bdevs_discovered": 2, 00:09:19.731 "num_base_bdevs_operational": 2, 00:09:19.731 "base_bdevs_list": [ 00:09:19.731 { 00:09:19.731 "name": null, 00:09:19.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.731 "is_configured": false, 00:09:19.731 "data_offset": 2048, 00:09:19.731 "data_size": 63488 00:09:19.731 }, 00:09:19.731 { 00:09:19.731 "name": "pt2", 00:09:19.731 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.731 "is_configured": true, 00:09:19.731 "data_offset": 2048, 00:09:19.731 "data_size": 63488 00:09:19.731 }, 00:09:19.731 { 00:09:19.731 "name": "pt3", 00:09:19.731 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:19.731 "is_configured": true, 00:09:19.731 "data_offset": 2048, 00:09:19.731 "data_size": 63488 00:09:19.731 } 00:09:19.731 ] 00:09:19.731 }' 00:09:19.731 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.731 12:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.301 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:20.301 12:52:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.301 12:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.301 [2024-11-26 12:52:37.775374] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:20.301 [2024-11-26 12:52:37.775439] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:20.301 [2024-11-26 12:52:37.775518] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.301 [2024-11-26 12:52:37.775589] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:20.301 [2024-11-26 12:52:37.775633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:20.301 12:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.301 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.301 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:20.301 12:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.301 12:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.301 12:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.301 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:20.301 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:20.301 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:20.301 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:20.301 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:20.301 12:52:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.301 12:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.301 12:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.301 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:20.301 12:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.301 12:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.301 [2024-11-26 12:52:37.843288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:20.301 [2024-11-26 12:52:37.843375] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.301 [2024-11-26 12:52:37.843416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:20.301 [2024-11-26 12:52:37.843449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.301 [2024-11-26 12:52:37.845534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.301 [2024-11-26 12:52:37.845601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:20.301 [2024-11-26 12:52:37.845704] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:20.301 [2024-11-26 12:52:37.845774] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:20.301 [2024-11-26 12:52:37.845899] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:20.301 [2024-11-26 12:52:37.845955] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:20.301 [2024-11-26 12:52:37.845993] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007080 name raid_bdev1, state configuring 00:09:20.301 [2024-11-26 12:52:37.846060] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:20.301 pt1 00:09:20.301 12:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.301 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:20.301 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:20.302 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.302 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.302 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.302 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.302 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:20.302 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.302 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.302 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.302 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.302 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.302 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.302 12:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.302 12:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.302 12:52:37 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.302 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.302 "name": "raid_bdev1", 00:09:20.302 "uuid": "4bc3eae2-6b24-4cf2-8bf3-ab132e34de93", 00:09:20.302 "strip_size_kb": 0, 00:09:20.302 "state": "configuring", 00:09:20.302 "raid_level": "raid1", 00:09:20.302 "superblock": true, 00:09:20.302 "num_base_bdevs": 3, 00:09:20.302 "num_base_bdevs_discovered": 1, 00:09:20.302 "num_base_bdevs_operational": 2, 00:09:20.302 "base_bdevs_list": [ 00:09:20.302 { 00:09:20.302 "name": null, 00:09:20.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.302 "is_configured": false, 00:09:20.302 "data_offset": 2048, 00:09:20.302 "data_size": 63488 00:09:20.302 }, 00:09:20.302 { 00:09:20.302 "name": "pt2", 00:09:20.302 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.302 "is_configured": true, 00:09:20.302 "data_offset": 2048, 00:09:20.302 "data_size": 63488 00:09:20.302 }, 00:09:20.302 { 00:09:20.302 "name": null, 00:09:20.302 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:20.302 "is_configured": false, 00:09:20.302 "data_offset": 2048, 00:09:20.302 "data_size": 63488 00:09:20.302 } 00:09:20.302 ] 00:09:20.302 }' 00:09:20.302 12:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.302 12:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.872 12:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:20.872 12:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.872 12:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.872 12:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:20.872 12:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:20.872 12:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:20.872 12:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:20.872 12:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.872 12:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.872 [2024-11-26 12:52:38.370401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:20.872 [2024-11-26 12:52:38.370453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.872 [2024-11-26 12:52:38.370471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:20.872 [2024-11-26 12:52:38.370482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.872 [2024-11-26 12:52:38.370813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.872 [2024-11-26 12:52:38.370833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:20.872 [2024-11-26 12:52:38.370895] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:20.872 [2024-11-26 12:52:38.370935] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:20.872 [2024-11-26 12:52:38.371024] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:09:20.872 [2024-11-26 12:52:38.371040] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:20.872 [2024-11-26 12:52:38.371291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:20.872 [2024-11-26 12:52:38.371424] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:09:20.872 [2024-11-26 12:52:38.371433] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:09:20.872 [2024-11-26 12:52:38.371536] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.872 pt3 00:09:20.872 12:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.872 12:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:20.872 12:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.872 12:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.872 12:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.872 12:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.872 12:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:20.872 12:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.872 12:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.872 12:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.872 12:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.872 12:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.872 12:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.872 12:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.872 12:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.872 12:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:09:20.872 12:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.872 "name": "raid_bdev1", 00:09:20.872 "uuid": "4bc3eae2-6b24-4cf2-8bf3-ab132e34de93", 00:09:20.872 "strip_size_kb": 0, 00:09:20.872 "state": "online", 00:09:20.872 "raid_level": "raid1", 00:09:20.872 "superblock": true, 00:09:20.872 "num_base_bdevs": 3, 00:09:20.872 "num_base_bdevs_discovered": 2, 00:09:20.872 "num_base_bdevs_operational": 2, 00:09:20.872 "base_bdevs_list": [ 00:09:20.872 { 00:09:20.872 "name": null, 00:09:20.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.872 "is_configured": false, 00:09:20.872 "data_offset": 2048, 00:09:20.872 "data_size": 63488 00:09:20.872 }, 00:09:20.872 { 00:09:20.872 "name": "pt2", 00:09:20.872 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.872 "is_configured": true, 00:09:20.872 "data_offset": 2048, 00:09:20.872 "data_size": 63488 00:09:20.872 }, 00:09:20.872 { 00:09:20.872 "name": "pt3", 00:09:20.872 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:20.872 "is_configured": true, 00:09:20.872 "data_offset": 2048, 00:09:20.872 "data_size": 63488 00:09:20.872 } 00:09:20.872 ] 00:09:20.872 }' 00:09:20.872 12:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.872 12:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.131 12:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:21.131 12:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:21.131 12:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.131 12:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.131 12:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.391 12:52:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:21.391 12:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:21.391 12:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:21.391 12:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.391 12:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.391 [2024-11-26 12:52:38.845815] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.391 12:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.391 12:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4bc3eae2-6b24-4cf2-8bf3-ab132e34de93 '!=' 4bc3eae2-6b24-4cf2-8bf3-ab132e34de93 ']' 00:09:21.391 12:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79888 00:09:21.391 12:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 79888 ']' 00:09:21.391 12:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 79888 00:09:21.391 12:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:21.391 12:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:21.391 12:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79888 00:09:21.391 killing process with pid 79888 00:09:21.391 12:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:21.391 12:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:21.391 12:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79888' 00:09:21.391 12:52:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@969 -- # kill 79888 00:09:21.391 [2024-11-26 12:52:38.901799] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:21.391 [2024-11-26 12:52:38.901865] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:21.391 [2024-11-26 12:52:38.901921] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:21.391 [2024-11-26 12:52:38.901930] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:09:21.391 12:52:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 79888 00:09:21.391 [2024-11-26 12:52:38.934555] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:21.651 12:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:21.651 00:09:21.651 real 0m6.317s 00:09:21.651 user 0m10.534s 00:09:21.651 sys 0m1.323s 00:09:21.651 12:52:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:21.651 12:52:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.651 ************************************ 00:09:21.651 END TEST raid_superblock_test 00:09:21.651 ************************************ 00:09:21.651 12:52:39 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:21.651 12:52:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:21.651 12:52:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:21.651 12:52:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:21.651 ************************************ 00:09:21.651 START TEST raid_read_error_test 00:09:21.651 ************************************ 00:09:21.651 12:52:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:09:21.651 12:52:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:21.651 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:21.651 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:21.651 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:21.651 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.651 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:21.651 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.651 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.651 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:21.651 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.651 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.651 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:21.651 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.651 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.651 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:21.651 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:21.651 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:21.652 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:21.652 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:21.652 12:52:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:21.652 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:21.652 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:21.652 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:21.652 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:21.652 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ot0cBaNnmC 00:09:21.652 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80318 00:09:21.652 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:21.652 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80318 00:09:21.652 12:52:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 80318 ']' 00:09:21.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.652 12:52:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.652 12:52:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:21.652 12:52:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.652 12:52:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:21.652 12:52:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.912 [2024-11-26 12:52:39.367728] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:21.912 [2024-11-26 12:52:39.367866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80318 ] 00:09:21.912 [2024-11-26 12:52:39.533543] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.912 [2024-11-26 12:52:39.577332] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.172 [2024-11-26 12:52:39.619712] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.172 [2024-11-26 12:52:39.619833] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.742 BaseBdev1_malloc 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.742 true 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.742 [2024-11-26 12:52:40.213725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:22.742 [2024-11-26 12:52:40.213781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.742 [2024-11-26 12:52:40.213817] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:22.742 [2024-11-26 12:52:40.213832] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.742 [2024-11-26 12:52:40.215944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.742 [2024-11-26 12:52:40.216020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:22.742 BaseBdev1 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.742 BaseBdev2_malloc 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.742 true 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.742 [2024-11-26 12:52:40.261577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:22.742 [2024-11-26 12:52:40.261661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.742 [2024-11-26 12:52:40.261698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:22.742 [2024-11-26 12:52:40.261706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.742 [2024-11-26 12:52:40.263784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.742 [2024-11-26 12:52:40.263823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:22.742 BaseBdev2 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.742 BaseBdev3_malloc 00:09:22.742 12:52:40 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.742 true 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.742 [2024-11-26 12:52:40.301924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:22.742 [2024-11-26 12:52:40.301966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.742 [2024-11-26 12:52:40.301998] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:22.742 [2024-11-26 12:52:40.302007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.742 [2024-11-26 12:52:40.303984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.742 [2024-11-26 12:52:40.304019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:22.742 BaseBdev3 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.742 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.742 [2024-11-26 12:52:40.313964] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.742 [2024-11-26 12:52:40.315791] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:22.742 [2024-11-26 12:52:40.315923] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:22.743 [2024-11-26 12:52:40.316121] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:22.743 [2024-11-26 12:52:40.316172] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:22.743 [2024-11-26 12:52:40.316435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:22.743 [2024-11-26 12:52:40.316614] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:22.743 [2024-11-26 12:52:40.316657] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:22.743 [2024-11-26 12:52:40.316803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.743 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.743 12:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:22.743 12:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:22.743 12:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.743 12:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.743 12:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:22.743 12:52:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.743 12:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.743 12:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.743 12:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.743 12:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.743 12:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.743 12:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:22.743 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.743 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.743 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.743 12:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.743 "name": "raid_bdev1", 00:09:22.743 "uuid": "fa0f22f9-45c8-4c89-869f-604158d3bf66", 00:09:22.743 "strip_size_kb": 0, 00:09:22.743 "state": "online", 00:09:22.743 "raid_level": "raid1", 00:09:22.743 "superblock": true, 00:09:22.743 "num_base_bdevs": 3, 00:09:22.743 "num_base_bdevs_discovered": 3, 00:09:22.743 "num_base_bdevs_operational": 3, 00:09:22.743 "base_bdevs_list": [ 00:09:22.743 { 00:09:22.743 "name": "BaseBdev1", 00:09:22.743 "uuid": "e15ca308-36c3-5f3e-9d35-0dca834954a4", 00:09:22.743 "is_configured": true, 00:09:22.743 "data_offset": 2048, 00:09:22.743 "data_size": 63488 00:09:22.743 }, 00:09:22.743 { 00:09:22.743 "name": "BaseBdev2", 00:09:22.743 "uuid": "cd6505f1-4aab-57a3-88cb-d99e8648d59e", 00:09:22.743 "is_configured": true, 00:09:22.743 "data_offset": 2048, 00:09:22.743 "data_size": 63488 
00:09:22.743 }, 00:09:22.743 { 00:09:22.743 "name": "BaseBdev3", 00:09:22.743 "uuid": "820cb86d-8880-5629-b9a2-a027a62e47d2", 00:09:22.743 "is_configured": true, 00:09:22.743 "data_offset": 2048, 00:09:22.743 "data_size": 63488 00:09:22.743 } 00:09:22.743 ] 00:09:22.743 }' 00:09:22.743 12:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.743 12:52:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.313 12:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:23.313 12:52:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:23.313 [2024-11-26 12:52:40.817407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:24.253 12:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:24.253 12:52:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.253 12:52:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.253 12:52:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.253 12:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:24.253 12:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:24.253 12:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:24.253 12:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:24.253 12:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:24.253 12:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:24.253 
12:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.253 12:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.253 12:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.253 12:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.253 12:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.253 12:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.253 12:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.253 12:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.253 12:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.253 12:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.253 12:52:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.253 12:52:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.253 12:52:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.253 12:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.253 "name": "raid_bdev1", 00:09:24.253 "uuid": "fa0f22f9-45c8-4c89-869f-604158d3bf66", 00:09:24.253 "strip_size_kb": 0, 00:09:24.253 "state": "online", 00:09:24.253 "raid_level": "raid1", 00:09:24.253 "superblock": true, 00:09:24.253 "num_base_bdevs": 3, 00:09:24.253 "num_base_bdevs_discovered": 3, 00:09:24.253 "num_base_bdevs_operational": 3, 00:09:24.253 "base_bdevs_list": [ 00:09:24.253 { 00:09:24.253 "name": "BaseBdev1", 00:09:24.253 "uuid": "e15ca308-36c3-5f3e-9d35-0dca834954a4", 
00:09:24.253 "is_configured": true, 00:09:24.253 "data_offset": 2048, 00:09:24.253 "data_size": 63488 00:09:24.253 }, 00:09:24.253 { 00:09:24.253 "name": "BaseBdev2", 00:09:24.253 "uuid": "cd6505f1-4aab-57a3-88cb-d99e8648d59e", 00:09:24.253 "is_configured": true, 00:09:24.253 "data_offset": 2048, 00:09:24.253 "data_size": 63488 00:09:24.253 }, 00:09:24.253 { 00:09:24.253 "name": "BaseBdev3", 00:09:24.253 "uuid": "820cb86d-8880-5629-b9a2-a027a62e47d2", 00:09:24.253 "is_configured": true, 00:09:24.253 "data_offset": 2048, 00:09:24.253 "data_size": 63488 00:09:24.253 } 00:09:24.253 ] 00:09:24.253 }' 00:09:24.253 12:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.253 12:52:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.513 12:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:24.513 12:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.513 12:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.513 [2024-11-26 12:52:42.183984] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:24.513 [2024-11-26 12:52:42.184098] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:24.513 [2024-11-26 12:52:42.186656] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:24.513 [2024-11-26 12:52:42.186714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.513 [2024-11-26 12:52:42.186816] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:24.513 [2024-11-26 12:52:42.186829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:24.513 { 00:09:24.513 "results": [ 00:09:24.513 { 00:09:24.513 "job": "raid_bdev1", 
00:09:24.513 "core_mask": "0x1", 00:09:24.513 "workload": "randrw", 00:09:24.513 "percentage": 50, 00:09:24.513 "status": "finished", 00:09:24.513 "queue_depth": 1, 00:09:24.513 "io_size": 131072, 00:09:24.513 "runtime": 1.367418, 00:09:24.513 "iops": 15102.916591707875, 00:09:24.513 "mibps": 1887.8645739634844, 00:09:24.513 "io_failed": 0, 00:09:24.513 "io_timeout": 0, 00:09:24.513 "avg_latency_us": 63.80243435191786, 00:09:24.513 "min_latency_us": 21.463755458515283, 00:09:24.513 "max_latency_us": 1337.907423580786 00:09:24.513 } 00:09:24.513 ], 00:09:24.513 "core_count": 1 00:09:24.513 } 00:09:24.773 12:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.773 12:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80318 00:09:24.773 12:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 80318 ']' 00:09:24.773 12:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 80318 00:09:24.773 12:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:24.773 12:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:24.773 12:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80318 00:09:24.773 12:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:24.773 killing process with pid 80318 00:09:24.773 12:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:24.773 12:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80318' 00:09:24.773 12:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 80318 00:09:24.773 [2024-11-26 12:52:42.226716] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:24.773 12:52:42 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 80318 00:09:24.773 [2024-11-26 12:52:42.251527] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:25.033 12:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ot0cBaNnmC 00:09:25.033 12:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:25.033 12:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:25.033 12:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:25.033 12:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:25.033 12:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:25.033 12:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:25.033 12:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:25.033 00:09:25.033 real 0m3.240s 00:09:25.033 user 0m4.035s 00:09:25.033 sys 0m0.549s 00:09:25.033 12:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:25.033 12:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.034 ************************************ 00:09:25.034 END TEST raid_read_error_test 00:09:25.034 ************************************ 00:09:25.034 12:52:42 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:25.034 12:52:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:25.034 12:52:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:25.034 12:52:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:25.034 ************************************ 00:09:25.034 START TEST raid_write_error_test 00:09:25.034 ************************************ 00:09:25.034 12:52:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TqreasQpVR 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80447 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80447 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 80447 ']' 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:25.034 12:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.034 [2024-11-26 12:52:42.672069] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:25.034 [2024-11-26 12:52:42.672272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80447 ] 00:09:25.294 [2024-11-26 12:52:42.831141] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.294 [2024-11-26 12:52:42.875277] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.294 [2024-11-26 12:52:42.917509] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.294 [2024-11-26 12:52:42.917554] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.863 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:25.863 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:25.863 12:52:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:25.863 12:52:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:25.863 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.863 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.863 BaseBdev1_malloc 00:09:25.863 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.863 12:52:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:25.863 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.863 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.863 true 00:09:25.863 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.863 12:52:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:25.863 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.863 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.863 [2024-11-26 12:52:43.531520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:25.863 [2024-11-26 12:52:43.531579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.863 [2024-11-26 12:52:43.531599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:25.863 [2024-11-26 12:52:43.531608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.863 [2024-11-26 12:52:43.533679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.863 [2024-11-26 12:52:43.533720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:25.863 BaseBdev1 00:09:25.863 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.863 12:52:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:25.863 12:52:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:25.863 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.863 12:52:43 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:26.123 BaseBdev2_malloc 00:09:26.123 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.123 12:52:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:26.123 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.123 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.123 true 00:09:26.123 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.123 12:52:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.124 [2024-11-26 12:52:43.586436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:26.124 [2024-11-26 12:52:43.586493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.124 [2024-11-26 12:52:43.586514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:26.124 [2024-11-26 12:52:43.586525] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.124 [2024-11-26 12:52:43.589143] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.124 [2024-11-26 12:52:43.589197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:26.124 BaseBdev2 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:26.124 12:52:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.124 BaseBdev3_malloc 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.124 true 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.124 [2024-11-26 12:52:43.626818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:26.124 [2024-11-26 12:52:43.626860] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.124 [2024-11-26 12:52:43.626892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:26.124 [2024-11-26 12:52:43.626900] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.124 [2024-11-26 12:52:43.628885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.124 [2024-11-26 12:52:43.628920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:26.124 BaseBdev3 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.124 [2024-11-26 12:52:43.638846] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:26.124 [2024-11-26 12:52:43.640619] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:26.124 [2024-11-26 12:52:43.640697] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:26.124 [2024-11-26 12:52:43.640865] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:26.124 [2024-11-26 12:52:43.640884] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:26.124 [2024-11-26 12:52:43.641117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:26.124 [2024-11-26 12:52:43.641292] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:26.124 [2024-11-26 12:52:43.641304] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:26.124 [2024-11-26 12:52:43.641434] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.124 "name": "raid_bdev1", 00:09:26.124 "uuid": "334b4103-562d-4efa-a2cd-cf9dc709bbb9", 00:09:26.124 "strip_size_kb": 0, 00:09:26.124 "state": "online", 00:09:26.124 "raid_level": "raid1", 00:09:26.124 "superblock": true, 00:09:26.124 "num_base_bdevs": 3, 00:09:26.124 "num_base_bdevs_discovered": 3, 00:09:26.124 "num_base_bdevs_operational": 3, 00:09:26.124 "base_bdevs_list": [ 00:09:26.124 { 00:09:26.124 "name": "BaseBdev1", 00:09:26.124 
"uuid": "d6b4c43c-89a7-5919-b10a-de7ef3f1675c", 00:09:26.124 "is_configured": true, 00:09:26.124 "data_offset": 2048, 00:09:26.124 "data_size": 63488 00:09:26.124 }, 00:09:26.124 { 00:09:26.124 "name": "BaseBdev2", 00:09:26.124 "uuid": "fced236b-e799-5e36-a54b-eaf3256109b6", 00:09:26.124 "is_configured": true, 00:09:26.124 "data_offset": 2048, 00:09:26.124 "data_size": 63488 00:09:26.124 }, 00:09:26.124 { 00:09:26.124 "name": "BaseBdev3", 00:09:26.124 "uuid": "74e92d61-5762-580c-b4ea-6e3f6df4b79f", 00:09:26.124 "is_configured": true, 00:09:26.124 "data_offset": 2048, 00:09:26.124 "data_size": 63488 00:09:26.124 } 00:09:26.124 ] 00:09:26.124 }' 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.124 12:52:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.695 12:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:26.695 12:52:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:26.695 [2024-11-26 12:52:44.158281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:27.636 12:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:27.636 12:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.636 12:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.636 [2024-11-26 12:52:45.073774] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:27.636 [2024-11-26 12:52:45.073900] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:27.636 [2024-11-26 12:52:45.074148] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 
00:09:27.636 12:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.636 12:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:27.636 12:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:27.636 12:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:27.636 12:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:27.636 12:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:27.636 12:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:27.636 12:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:27.636 12:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.636 12:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.636 12:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:27.636 12:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.636 12:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.636 12:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.636 12:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.636 12:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.636 12:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:27.636 12:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:27.636 12:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.636 12:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.636 12:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.636 "name": "raid_bdev1", 00:09:27.636 "uuid": "334b4103-562d-4efa-a2cd-cf9dc709bbb9", 00:09:27.636 "strip_size_kb": 0, 00:09:27.636 "state": "online", 00:09:27.636 "raid_level": "raid1", 00:09:27.636 "superblock": true, 00:09:27.636 "num_base_bdevs": 3, 00:09:27.636 "num_base_bdevs_discovered": 2, 00:09:27.636 "num_base_bdevs_operational": 2, 00:09:27.636 "base_bdevs_list": [ 00:09:27.636 { 00:09:27.636 "name": null, 00:09:27.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.636 "is_configured": false, 00:09:27.636 "data_offset": 0, 00:09:27.636 "data_size": 63488 00:09:27.636 }, 00:09:27.636 { 00:09:27.636 "name": "BaseBdev2", 00:09:27.636 "uuid": "fced236b-e799-5e36-a54b-eaf3256109b6", 00:09:27.636 "is_configured": true, 00:09:27.636 "data_offset": 2048, 00:09:27.636 "data_size": 63488 00:09:27.636 }, 00:09:27.636 { 00:09:27.636 "name": "BaseBdev3", 00:09:27.636 "uuid": "74e92d61-5762-580c-b4ea-6e3f6df4b79f", 00:09:27.636 "is_configured": true, 00:09:27.636 "data_offset": 2048, 00:09:27.636 "data_size": 63488 00:09:27.636 } 00:09:27.636 ] 00:09:27.636 }' 00:09:27.636 12:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.636 12:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.896 12:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:27.896 12:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.896 12:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.896 [2024-11-26 12:52:45.531823] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:27.896 [2024-11-26 12:52:45.531917] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:27.896 [2024-11-26 12:52:45.534309] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:27.896 [2024-11-26 12:52:45.534351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.896 [2024-11-26 12:52:45.534429] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:27.896 [2024-11-26 12:52:45.534439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:27.896 { 00:09:27.896 "results": [ 00:09:27.896 { 00:09:27.896 "job": "raid_bdev1", 00:09:27.896 "core_mask": "0x1", 00:09:27.897 "workload": "randrw", 00:09:27.897 "percentage": 50, 00:09:27.897 "status": "finished", 00:09:27.897 "queue_depth": 1, 00:09:27.897 "io_size": 131072, 00:09:27.897 "runtime": 1.374408, 00:09:27.897 "iops": 16949.843132461396, 00:09:27.897 "mibps": 2118.7303915576745, 00:09:27.897 "io_failed": 0, 00:09:27.897 "io_timeout": 0, 00:09:27.897 "avg_latency_us": 56.575452276980656, 00:09:27.897 "min_latency_us": 21.463755458515283, 00:09:27.897 "max_latency_us": 1516.7720524017468 00:09:27.897 } 00:09:27.897 ], 00:09:27.897 "core_count": 1 00:09:27.897 } 00:09:27.897 12:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.897 12:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80447 00:09:27.897 12:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 80447 ']' 00:09:27.897 12:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 80447 00:09:27.897 12:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:27.897 12:52:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:27.897 12:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80447 00:09:27.897 12:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:27.897 killing process with pid 80447 00:09:27.897 12:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:27.897 12:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80447' 00:09:27.897 12:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 80447 00:09:27.897 [2024-11-26 12:52:45.568597] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:27.897 12:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 80447 00:09:28.157 [2024-11-26 12:52:45.593963] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:28.157 12:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TqreasQpVR 00:09:28.157 12:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:28.157 12:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:28.419 12:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:28.419 12:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:28.419 ************************************ 00:09:28.419 END TEST raid_write_error_test 00:09:28.419 ************************************ 00:09:28.419 12:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:28.419 12:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:28.419 12:52:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:09:28.419 00:09:28.419 real 0m3.268s 00:09:28.419 user 0m4.080s 00:09:28.419 sys 0m0.562s 00:09:28.419 12:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:28.419 12:52:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.419 12:52:45 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:28.419 12:52:45 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:28.419 12:52:45 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:09:28.419 12:52:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:28.419 12:52:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:28.419 12:52:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:28.419 ************************************ 00:09:28.419 START TEST raid_state_function_test 00:09:28.419 ************************************ 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:28.419 
12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:28.419 12:52:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80584 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80584' 00:09:28.419 Process raid pid: 80584 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80584 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 80584 ']' 00:09:28.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:28.419 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.419 [2024-11-26 12:52:46.006540] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:28.419 [2024-11-26 12:52:46.006737] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.679 [2024-11-26 12:52:46.167610] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.679 [2024-11-26 12:52:46.212213] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.679 [2024-11-26 12:52:46.254391] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.679 [2024-11-26 12:52:46.254420] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.249 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:29.249 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:29.249 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:29.249 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.249 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.249 [2024-11-26 12:52:46.831778] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:29.249 [2024-11-26 12:52:46.831891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:29.249 [2024-11-26 12:52:46.831917] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:29.249 [2024-11-26 12:52:46.831928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:29.249 [2024-11-26 12:52:46.831934] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:29.249 [2024-11-26 12:52:46.831945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:29.249 [2024-11-26 12:52:46.831951] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:29.249 [2024-11-26 12:52:46.831961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:29.249 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.249 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:29.249 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.249 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.249 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:29.249 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.249 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:29.249 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.249 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.249 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.249 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.249 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.249 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.249 12:52:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.249 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.249 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.249 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.249 "name": "Existed_Raid", 00:09:29.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.249 "strip_size_kb": 64, 00:09:29.249 "state": "configuring", 00:09:29.249 "raid_level": "raid0", 00:09:29.249 "superblock": false, 00:09:29.249 "num_base_bdevs": 4, 00:09:29.249 "num_base_bdevs_discovered": 0, 00:09:29.249 "num_base_bdevs_operational": 4, 00:09:29.249 "base_bdevs_list": [ 00:09:29.249 { 00:09:29.249 "name": "BaseBdev1", 00:09:29.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.249 "is_configured": false, 00:09:29.249 "data_offset": 0, 00:09:29.249 "data_size": 0 00:09:29.249 }, 00:09:29.249 { 00:09:29.249 "name": "BaseBdev2", 00:09:29.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.249 "is_configured": false, 00:09:29.249 "data_offset": 0, 00:09:29.249 "data_size": 0 00:09:29.249 }, 00:09:29.249 { 00:09:29.249 "name": "BaseBdev3", 00:09:29.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.250 "is_configured": false, 00:09:29.250 "data_offset": 0, 00:09:29.250 "data_size": 0 00:09:29.250 }, 00:09:29.250 { 00:09:29.250 "name": "BaseBdev4", 00:09:29.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.250 "is_configured": false, 00:09:29.250 "data_offset": 0, 00:09:29.250 "data_size": 0 00:09:29.250 } 00:09:29.250 ] 00:09:29.250 }' 00:09:29.250 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.250 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.821 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:29.821 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.821 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.821 [2024-11-26 12:52:47.242971] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:29.821 [2024-11-26 12:52:47.243007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:29.821 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.821 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:29.821 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.821 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.821 [2024-11-26 12:52:47.254994] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:29.821 [2024-11-26 12:52:47.255033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:29.821 [2024-11-26 12:52:47.255041] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:29.821 [2024-11-26 12:52:47.255049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:29.821 [2024-11-26 12:52:47.255055] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:29.821 [2024-11-26 12:52:47.255064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:29.821 [2024-11-26 12:52:47.255069] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:29.821 [2024-11-26 12:52:47.255078] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:29.821 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.821 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:29.821 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.821 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.821 [2024-11-26 12:52:47.275786] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:29.821 BaseBdev1 00:09:29.821 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.822 [ 00:09:29.822 { 00:09:29.822 "name": "BaseBdev1", 00:09:29.822 "aliases": [ 00:09:29.822 "ec764c63-a7bc-4f73-b516-b35046f30402" 00:09:29.822 ], 00:09:29.822 "product_name": "Malloc disk", 00:09:29.822 "block_size": 512, 00:09:29.822 "num_blocks": 65536, 00:09:29.822 "uuid": "ec764c63-a7bc-4f73-b516-b35046f30402", 00:09:29.822 "assigned_rate_limits": { 00:09:29.822 "rw_ios_per_sec": 0, 00:09:29.822 "rw_mbytes_per_sec": 0, 00:09:29.822 "r_mbytes_per_sec": 0, 00:09:29.822 "w_mbytes_per_sec": 0 00:09:29.822 }, 00:09:29.822 "claimed": true, 00:09:29.822 "claim_type": "exclusive_write", 00:09:29.822 "zoned": false, 00:09:29.822 "supported_io_types": { 00:09:29.822 "read": true, 00:09:29.822 "write": true, 00:09:29.822 "unmap": true, 00:09:29.822 "flush": true, 00:09:29.822 "reset": true, 00:09:29.822 "nvme_admin": false, 00:09:29.822 "nvme_io": false, 00:09:29.822 "nvme_io_md": false, 00:09:29.822 "write_zeroes": true, 00:09:29.822 "zcopy": true, 00:09:29.822 "get_zone_info": false, 00:09:29.822 "zone_management": false, 00:09:29.822 "zone_append": false, 00:09:29.822 "compare": false, 00:09:29.822 "compare_and_write": false, 00:09:29.822 "abort": true, 00:09:29.822 "seek_hole": false, 00:09:29.822 "seek_data": false, 00:09:29.822 "copy": true, 00:09:29.822 "nvme_iov_md": false 00:09:29.822 }, 00:09:29.822 "memory_domains": [ 00:09:29.822 { 00:09:29.822 "dma_device_id": "system", 00:09:29.822 "dma_device_type": 1 00:09:29.822 }, 00:09:29.822 { 00:09:29.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.822 "dma_device_type": 2 00:09:29.822 } 00:09:29.822 ], 00:09:29.822 "driver_specific": {} 00:09:29.822 } 00:09:29.822 ] 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.822 "name": "Existed_Raid", 
00:09:29.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.822 "strip_size_kb": 64, 00:09:29.822 "state": "configuring", 00:09:29.822 "raid_level": "raid0", 00:09:29.822 "superblock": false, 00:09:29.822 "num_base_bdevs": 4, 00:09:29.822 "num_base_bdevs_discovered": 1, 00:09:29.822 "num_base_bdevs_operational": 4, 00:09:29.822 "base_bdevs_list": [ 00:09:29.822 { 00:09:29.822 "name": "BaseBdev1", 00:09:29.822 "uuid": "ec764c63-a7bc-4f73-b516-b35046f30402", 00:09:29.822 "is_configured": true, 00:09:29.822 "data_offset": 0, 00:09:29.822 "data_size": 65536 00:09:29.822 }, 00:09:29.822 { 00:09:29.822 "name": "BaseBdev2", 00:09:29.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.822 "is_configured": false, 00:09:29.822 "data_offset": 0, 00:09:29.822 "data_size": 0 00:09:29.822 }, 00:09:29.822 { 00:09:29.822 "name": "BaseBdev3", 00:09:29.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.822 "is_configured": false, 00:09:29.822 "data_offset": 0, 00:09:29.822 "data_size": 0 00:09:29.822 }, 00:09:29.822 { 00:09:29.822 "name": "BaseBdev4", 00:09:29.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.822 "is_configured": false, 00:09:29.822 "data_offset": 0, 00:09:29.822 "data_size": 0 00:09:29.822 } 00:09:29.822 ] 00:09:29.822 }' 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.822 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.083 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:30.083 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.083 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.083 [2024-11-26 12:52:47.703160] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:30.083 [2024-11-26 12:52:47.703281] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:30.083 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.083 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:30.083 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.083 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.083 [2024-11-26 12:52:47.711204] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:30.083 [2024-11-26 12:52:47.713004] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:30.083 [2024-11-26 12:52:47.713090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:30.083 [2024-11-26 12:52:47.713117] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:30.083 [2024-11-26 12:52:47.713146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:30.083 [2024-11-26 12:52:47.713164] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:30.083 [2024-11-26 12:52:47.713184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:30.083 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.083 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:30.083 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:30.083 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:09:30.083 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.083 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.083 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:30.083 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.083 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:30.083 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.083 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.083 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.083 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.083 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.083 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.083 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.083 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.083 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.083 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.083 "name": "Existed_Raid", 00:09:30.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.083 "strip_size_kb": 64, 00:09:30.083 "state": "configuring", 00:09:30.083 "raid_level": "raid0", 00:09:30.083 "superblock": false, 00:09:30.083 "num_base_bdevs": 4, 00:09:30.083 
"num_base_bdevs_discovered": 1, 00:09:30.083 "num_base_bdevs_operational": 4, 00:09:30.083 "base_bdevs_list": [ 00:09:30.083 { 00:09:30.083 "name": "BaseBdev1", 00:09:30.083 "uuid": "ec764c63-a7bc-4f73-b516-b35046f30402", 00:09:30.083 "is_configured": true, 00:09:30.083 "data_offset": 0, 00:09:30.083 "data_size": 65536 00:09:30.083 }, 00:09:30.083 { 00:09:30.083 "name": "BaseBdev2", 00:09:30.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.083 "is_configured": false, 00:09:30.083 "data_offset": 0, 00:09:30.083 "data_size": 0 00:09:30.083 }, 00:09:30.083 { 00:09:30.083 "name": "BaseBdev3", 00:09:30.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.083 "is_configured": false, 00:09:30.083 "data_offset": 0, 00:09:30.083 "data_size": 0 00:09:30.083 }, 00:09:30.083 { 00:09:30.083 "name": "BaseBdev4", 00:09:30.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.083 "is_configured": false, 00:09:30.083 "data_offset": 0, 00:09:30.083 "data_size": 0 00:09:30.083 } 00:09:30.083 ] 00:09:30.083 }' 00:09:30.083 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.083 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.653 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:30.653 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.653 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.653 [2024-11-26 12:52:48.175025] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:30.653 BaseBdev2 00:09:30.653 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.653 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:30.653 12:52:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:30.653 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:30.653 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:30.653 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:30.653 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:30.653 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:30.653 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.654 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.654 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.654 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:30.654 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.654 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.654 [ 00:09:30.654 { 00:09:30.654 "name": "BaseBdev2", 00:09:30.654 "aliases": [ 00:09:30.654 "ea575c8a-dbd3-4dfe-9ffc-7ff7d3c9ac24" 00:09:30.654 ], 00:09:30.654 "product_name": "Malloc disk", 00:09:30.654 "block_size": 512, 00:09:30.654 "num_blocks": 65536, 00:09:30.654 "uuid": "ea575c8a-dbd3-4dfe-9ffc-7ff7d3c9ac24", 00:09:30.654 "assigned_rate_limits": { 00:09:30.654 "rw_ios_per_sec": 0, 00:09:30.654 "rw_mbytes_per_sec": 0, 00:09:30.654 "r_mbytes_per_sec": 0, 00:09:30.654 "w_mbytes_per_sec": 0 00:09:30.654 }, 00:09:30.654 "claimed": true, 00:09:30.654 "claim_type": "exclusive_write", 00:09:30.654 "zoned": false, 00:09:30.654 "supported_io_types": { 
00:09:30.654 "read": true, 00:09:30.654 "write": true, 00:09:30.654 "unmap": true, 00:09:30.654 "flush": true, 00:09:30.654 "reset": true, 00:09:30.654 "nvme_admin": false, 00:09:30.654 "nvme_io": false, 00:09:30.654 "nvme_io_md": false, 00:09:30.654 "write_zeroes": true, 00:09:30.654 "zcopy": true, 00:09:30.654 "get_zone_info": false, 00:09:30.654 "zone_management": false, 00:09:30.654 "zone_append": false, 00:09:30.654 "compare": false, 00:09:30.654 "compare_and_write": false, 00:09:30.654 "abort": true, 00:09:30.654 "seek_hole": false, 00:09:30.654 "seek_data": false, 00:09:30.654 "copy": true, 00:09:30.654 "nvme_iov_md": false 00:09:30.654 }, 00:09:30.654 "memory_domains": [ 00:09:30.654 { 00:09:30.654 "dma_device_id": "system", 00:09:30.654 "dma_device_type": 1 00:09:30.654 }, 00:09:30.654 { 00:09:30.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.654 "dma_device_type": 2 00:09:30.654 } 00:09:30.654 ], 00:09:30.654 "driver_specific": {} 00:09:30.654 } 00:09:30.654 ] 00:09:30.654 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.654 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:30.654 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:30.654 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:30.654 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:30.654 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.654 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.654 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:30.654 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:30.654 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:30.654 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.654 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.654 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.654 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.654 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.654 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.654 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.654 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.654 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.654 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.654 "name": "Existed_Raid", 00:09:30.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.654 "strip_size_kb": 64, 00:09:30.654 "state": "configuring", 00:09:30.654 "raid_level": "raid0", 00:09:30.654 "superblock": false, 00:09:30.654 "num_base_bdevs": 4, 00:09:30.654 "num_base_bdevs_discovered": 2, 00:09:30.654 "num_base_bdevs_operational": 4, 00:09:30.654 "base_bdevs_list": [ 00:09:30.654 { 00:09:30.654 "name": "BaseBdev1", 00:09:30.654 "uuid": "ec764c63-a7bc-4f73-b516-b35046f30402", 00:09:30.654 "is_configured": true, 00:09:30.654 "data_offset": 0, 00:09:30.654 "data_size": 65536 00:09:30.654 }, 00:09:30.654 { 00:09:30.654 "name": "BaseBdev2", 00:09:30.654 "uuid": "ea575c8a-dbd3-4dfe-9ffc-7ff7d3c9ac24", 00:09:30.654 
"is_configured": true, 00:09:30.654 "data_offset": 0, 00:09:30.654 "data_size": 65536 00:09:30.654 }, 00:09:30.654 { 00:09:30.654 "name": "BaseBdev3", 00:09:30.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.654 "is_configured": false, 00:09:30.654 "data_offset": 0, 00:09:30.654 "data_size": 0 00:09:30.654 }, 00:09:30.654 { 00:09:30.654 "name": "BaseBdev4", 00:09:30.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.654 "is_configured": false, 00:09:30.654 "data_offset": 0, 00:09:30.654 "data_size": 0 00:09:30.654 } 00:09:30.654 ] 00:09:30.654 }' 00:09:30.654 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.654 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.224 [2024-11-26 12:52:48.669168] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:31.224 BaseBdev3 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.224 [ 00:09:31.224 { 00:09:31.224 "name": "BaseBdev3", 00:09:31.224 "aliases": [ 00:09:31.224 "d26be723-a034-4d39-90f9-d2d894f86dff" 00:09:31.224 ], 00:09:31.224 "product_name": "Malloc disk", 00:09:31.224 "block_size": 512, 00:09:31.224 "num_blocks": 65536, 00:09:31.224 "uuid": "d26be723-a034-4d39-90f9-d2d894f86dff", 00:09:31.224 "assigned_rate_limits": { 00:09:31.224 "rw_ios_per_sec": 0, 00:09:31.224 "rw_mbytes_per_sec": 0, 00:09:31.224 "r_mbytes_per_sec": 0, 00:09:31.224 "w_mbytes_per_sec": 0 00:09:31.224 }, 00:09:31.224 "claimed": true, 00:09:31.224 "claim_type": "exclusive_write", 00:09:31.224 "zoned": false, 00:09:31.224 "supported_io_types": { 00:09:31.224 "read": true, 00:09:31.224 "write": true, 00:09:31.224 "unmap": true, 00:09:31.224 "flush": true, 00:09:31.224 "reset": true, 00:09:31.224 "nvme_admin": false, 00:09:31.224 "nvme_io": false, 00:09:31.224 "nvme_io_md": false, 00:09:31.224 "write_zeroes": true, 00:09:31.224 "zcopy": true, 00:09:31.224 "get_zone_info": false, 00:09:31.224 "zone_management": false, 00:09:31.224 "zone_append": false, 00:09:31.224 "compare": false, 00:09:31.224 "compare_and_write": false, 
00:09:31.224 "abort": true, 00:09:31.224 "seek_hole": false, 00:09:31.224 "seek_data": false, 00:09:31.224 "copy": true, 00:09:31.224 "nvme_iov_md": false 00:09:31.224 }, 00:09:31.224 "memory_domains": [ 00:09:31.224 { 00:09:31.224 "dma_device_id": "system", 00:09:31.224 "dma_device_type": 1 00:09:31.224 }, 00:09:31.224 { 00:09:31.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.224 "dma_device_type": 2 00:09:31.224 } 00:09:31.224 ], 00:09:31.224 "driver_specific": {} 00:09:31.224 } 00:09:31.224 ] 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.224 "name": "Existed_Raid", 00:09:31.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.224 "strip_size_kb": 64, 00:09:31.224 "state": "configuring", 00:09:31.224 "raid_level": "raid0", 00:09:31.224 "superblock": false, 00:09:31.224 "num_base_bdevs": 4, 00:09:31.224 "num_base_bdevs_discovered": 3, 00:09:31.224 "num_base_bdevs_operational": 4, 00:09:31.224 "base_bdevs_list": [ 00:09:31.224 { 00:09:31.224 "name": "BaseBdev1", 00:09:31.224 "uuid": "ec764c63-a7bc-4f73-b516-b35046f30402", 00:09:31.224 "is_configured": true, 00:09:31.224 "data_offset": 0, 00:09:31.224 "data_size": 65536 00:09:31.224 }, 00:09:31.224 { 00:09:31.224 "name": "BaseBdev2", 00:09:31.224 "uuid": "ea575c8a-dbd3-4dfe-9ffc-7ff7d3c9ac24", 00:09:31.224 "is_configured": true, 00:09:31.224 "data_offset": 0, 00:09:31.224 "data_size": 65536 00:09:31.224 }, 00:09:31.224 { 00:09:31.224 "name": "BaseBdev3", 00:09:31.224 "uuid": "d26be723-a034-4d39-90f9-d2d894f86dff", 00:09:31.224 "is_configured": true, 00:09:31.224 "data_offset": 0, 00:09:31.224 "data_size": 65536 00:09:31.224 }, 00:09:31.224 { 00:09:31.224 "name": "BaseBdev4", 00:09:31.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.224 "is_configured": false, 
00:09:31.224 "data_offset": 0, 00:09:31.224 "data_size": 0 00:09:31.224 } 00:09:31.224 ] 00:09:31.224 }' 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.224 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.483 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:31.483 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.483 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.483 [2024-11-26 12:52:49.119465] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:31.483 [2024-11-26 12:52:49.119562] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:31.483 [2024-11-26 12:52:49.119611] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:31.483 [2024-11-26 12:52:49.119951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:31.483 [2024-11-26 12:52:49.120128] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:31.483 [2024-11-26 12:52:49.120186] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:31.483 [2024-11-26 12:52:49.120435] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.483 BaseBdev4 00:09:31.483 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.483 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:31.483 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:31.483 12:52:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:31.483 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:31.483 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:31.483 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:31.484 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:31.484 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.484 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.484 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.484 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:31.484 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.484 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.484 [ 00:09:31.484 { 00:09:31.484 "name": "BaseBdev4", 00:09:31.484 "aliases": [ 00:09:31.484 "2a0cf5b2-c55b-4beb-83ef-8ee30afb9354" 00:09:31.484 ], 00:09:31.484 "product_name": "Malloc disk", 00:09:31.484 "block_size": 512, 00:09:31.484 "num_blocks": 65536, 00:09:31.484 "uuid": "2a0cf5b2-c55b-4beb-83ef-8ee30afb9354", 00:09:31.484 "assigned_rate_limits": { 00:09:31.484 "rw_ios_per_sec": 0, 00:09:31.484 "rw_mbytes_per_sec": 0, 00:09:31.484 "r_mbytes_per_sec": 0, 00:09:31.484 "w_mbytes_per_sec": 0 00:09:31.484 }, 00:09:31.484 "claimed": true, 00:09:31.484 "claim_type": "exclusive_write", 00:09:31.484 "zoned": false, 00:09:31.484 "supported_io_types": { 00:09:31.484 "read": true, 00:09:31.484 "write": true, 00:09:31.484 "unmap": true, 00:09:31.484 "flush": true, 00:09:31.484 "reset": true, 00:09:31.484 
"nvme_admin": false, 00:09:31.484 "nvme_io": false, 00:09:31.484 "nvme_io_md": false, 00:09:31.484 "write_zeroes": true, 00:09:31.484 "zcopy": true, 00:09:31.484 "get_zone_info": false, 00:09:31.484 "zone_management": false, 00:09:31.484 "zone_append": false, 00:09:31.484 "compare": false, 00:09:31.484 "compare_and_write": false, 00:09:31.484 "abort": true, 00:09:31.484 "seek_hole": false, 00:09:31.484 "seek_data": false, 00:09:31.484 "copy": true, 00:09:31.484 "nvme_iov_md": false 00:09:31.484 }, 00:09:31.484 "memory_domains": [ 00:09:31.484 { 00:09:31.484 "dma_device_id": "system", 00:09:31.484 "dma_device_type": 1 00:09:31.484 }, 00:09:31.484 { 00:09:31.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.484 "dma_device_type": 2 00:09:31.484 } 00:09:31.484 ], 00:09:31.484 "driver_specific": {} 00:09:31.484 } 00:09:31.484 ] 00:09:31.484 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.484 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:31.484 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:31.484 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:31.484 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:31.484 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.484 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.484 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:31.484 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.484 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:31.484 12:52:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.484 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.484 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.484 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.484 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.484 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.484 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.484 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.749 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.749 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.749 "name": "Existed_Raid", 00:09:31.749 "uuid": "38cfe57f-45da-4059-a213-2775219d57f6", 00:09:31.749 "strip_size_kb": 64, 00:09:31.749 "state": "online", 00:09:31.749 "raid_level": "raid0", 00:09:31.749 "superblock": false, 00:09:31.749 "num_base_bdevs": 4, 00:09:31.749 "num_base_bdevs_discovered": 4, 00:09:31.749 "num_base_bdevs_operational": 4, 00:09:31.749 "base_bdevs_list": [ 00:09:31.749 { 00:09:31.749 "name": "BaseBdev1", 00:09:31.749 "uuid": "ec764c63-a7bc-4f73-b516-b35046f30402", 00:09:31.749 "is_configured": true, 00:09:31.749 "data_offset": 0, 00:09:31.749 "data_size": 65536 00:09:31.749 }, 00:09:31.749 { 00:09:31.749 "name": "BaseBdev2", 00:09:31.749 "uuid": "ea575c8a-dbd3-4dfe-9ffc-7ff7d3c9ac24", 00:09:31.749 "is_configured": true, 00:09:31.749 "data_offset": 0, 00:09:31.749 "data_size": 65536 00:09:31.749 }, 00:09:31.749 { 00:09:31.749 "name": "BaseBdev3", 00:09:31.749 "uuid": 
"d26be723-a034-4d39-90f9-d2d894f86dff", 00:09:31.749 "is_configured": true, 00:09:31.749 "data_offset": 0, 00:09:31.749 "data_size": 65536 00:09:31.749 }, 00:09:31.749 { 00:09:31.749 "name": "BaseBdev4", 00:09:31.749 "uuid": "2a0cf5b2-c55b-4beb-83ef-8ee30afb9354", 00:09:31.749 "is_configured": true, 00:09:31.749 "data_offset": 0, 00:09:31.749 "data_size": 65536 00:09:31.749 } 00:09:31.749 ] 00:09:31.749 }' 00:09:31.749 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.749 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.050 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:32.051 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:32.051 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:32.051 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:32.051 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:32.051 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:32.051 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:32.051 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.051 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.051 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:32.051 [2024-11-26 12:52:49.563089] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:32.051 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.051 12:52:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:32.051 "name": "Existed_Raid", 00:09:32.051 "aliases": [ 00:09:32.051 "38cfe57f-45da-4059-a213-2775219d57f6" 00:09:32.051 ], 00:09:32.051 "product_name": "Raid Volume", 00:09:32.051 "block_size": 512, 00:09:32.051 "num_blocks": 262144, 00:09:32.051 "uuid": "38cfe57f-45da-4059-a213-2775219d57f6", 00:09:32.051 "assigned_rate_limits": { 00:09:32.051 "rw_ios_per_sec": 0, 00:09:32.051 "rw_mbytes_per_sec": 0, 00:09:32.051 "r_mbytes_per_sec": 0, 00:09:32.051 "w_mbytes_per_sec": 0 00:09:32.051 }, 00:09:32.051 "claimed": false, 00:09:32.051 "zoned": false, 00:09:32.051 "supported_io_types": { 00:09:32.051 "read": true, 00:09:32.051 "write": true, 00:09:32.051 "unmap": true, 00:09:32.051 "flush": true, 00:09:32.051 "reset": true, 00:09:32.051 "nvme_admin": false, 00:09:32.051 "nvme_io": false, 00:09:32.051 "nvme_io_md": false, 00:09:32.051 "write_zeroes": true, 00:09:32.051 "zcopy": false, 00:09:32.051 "get_zone_info": false, 00:09:32.051 "zone_management": false, 00:09:32.051 "zone_append": false, 00:09:32.051 "compare": false, 00:09:32.051 "compare_and_write": false, 00:09:32.051 "abort": false, 00:09:32.051 "seek_hole": false, 00:09:32.051 "seek_data": false, 00:09:32.051 "copy": false, 00:09:32.051 "nvme_iov_md": false 00:09:32.051 }, 00:09:32.051 "memory_domains": [ 00:09:32.051 { 00:09:32.051 "dma_device_id": "system", 00:09:32.051 "dma_device_type": 1 00:09:32.051 }, 00:09:32.051 { 00:09:32.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.051 "dma_device_type": 2 00:09:32.051 }, 00:09:32.051 { 00:09:32.051 "dma_device_id": "system", 00:09:32.051 "dma_device_type": 1 00:09:32.051 }, 00:09:32.051 { 00:09:32.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.051 "dma_device_type": 2 00:09:32.051 }, 00:09:32.051 { 00:09:32.051 "dma_device_id": "system", 00:09:32.051 "dma_device_type": 1 00:09:32.051 }, 00:09:32.051 { 00:09:32.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:32.051 "dma_device_type": 2 00:09:32.051 }, 00:09:32.051 { 00:09:32.051 "dma_device_id": "system", 00:09:32.051 "dma_device_type": 1 00:09:32.051 }, 00:09:32.051 { 00:09:32.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.051 "dma_device_type": 2 00:09:32.051 } 00:09:32.051 ], 00:09:32.051 "driver_specific": { 00:09:32.051 "raid": { 00:09:32.051 "uuid": "38cfe57f-45da-4059-a213-2775219d57f6", 00:09:32.051 "strip_size_kb": 64, 00:09:32.051 "state": "online", 00:09:32.051 "raid_level": "raid0", 00:09:32.051 "superblock": false, 00:09:32.051 "num_base_bdevs": 4, 00:09:32.051 "num_base_bdevs_discovered": 4, 00:09:32.051 "num_base_bdevs_operational": 4, 00:09:32.051 "base_bdevs_list": [ 00:09:32.051 { 00:09:32.051 "name": "BaseBdev1", 00:09:32.051 "uuid": "ec764c63-a7bc-4f73-b516-b35046f30402", 00:09:32.051 "is_configured": true, 00:09:32.051 "data_offset": 0, 00:09:32.051 "data_size": 65536 00:09:32.051 }, 00:09:32.051 { 00:09:32.051 "name": "BaseBdev2", 00:09:32.051 "uuid": "ea575c8a-dbd3-4dfe-9ffc-7ff7d3c9ac24", 00:09:32.051 "is_configured": true, 00:09:32.051 "data_offset": 0, 00:09:32.051 "data_size": 65536 00:09:32.051 }, 00:09:32.051 { 00:09:32.051 "name": "BaseBdev3", 00:09:32.051 "uuid": "d26be723-a034-4d39-90f9-d2d894f86dff", 00:09:32.051 "is_configured": true, 00:09:32.051 "data_offset": 0, 00:09:32.051 "data_size": 65536 00:09:32.051 }, 00:09:32.051 { 00:09:32.051 "name": "BaseBdev4", 00:09:32.051 "uuid": "2a0cf5b2-c55b-4beb-83ef-8ee30afb9354", 00:09:32.051 "is_configured": true, 00:09:32.051 "data_offset": 0, 00:09:32.051 "data_size": 65536 00:09:32.051 } 00:09:32.051 ] 00:09:32.051 } 00:09:32.051 } 00:09:32.051 }' 00:09:32.051 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:32.051 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:32.051 BaseBdev2 00:09:32.051 BaseBdev3 
00:09:32.051 BaseBdev4' 00:09:32.051 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.051 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:32.051 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.051 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:32.051 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.051 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.051 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.051 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.315 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.315 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.315 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.315 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:32.315 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.315 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.315 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.315 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.315 12:52:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.315 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.315 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.315 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.315 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:32.315 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.315 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.315 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.315 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.315 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.315 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.315 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.316 12:52:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.316 [2024-11-26 12:52:49.826349] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:32.316 [2024-11-26 12:52:49.826379] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:32.316 [2024-11-26 12:52:49.826439] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.316 "name": "Existed_Raid", 00:09:32.316 "uuid": "38cfe57f-45da-4059-a213-2775219d57f6", 00:09:32.316 "strip_size_kb": 64, 00:09:32.316 "state": "offline", 00:09:32.316 "raid_level": "raid0", 00:09:32.316 "superblock": false, 00:09:32.316 "num_base_bdevs": 4, 00:09:32.316 "num_base_bdevs_discovered": 3, 00:09:32.316 "num_base_bdevs_operational": 3, 00:09:32.316 "base_bdevs_list": [ 00:09:32.316 { 00:09:32.316 "name": null, 00:09:32.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.316 "is_configured": false, 00:09:32.316 "data_offset": 0, 00:09:32.316 "data_size": 65536 00:09:32.316 }, 00:09:32.316 { 00:09:32.316 "name": "BaseBdev2", 00:09:32.316 "uuid": "ea575c8a-dbd3-4dfe-9ffc-7ff7d3c9ac24", 00:09:32.316 "is_configured": 
true, 00:09:32.316 "data_offset": 0, 00:09:32.316 "data_size": 65536 00:09:32.316 }, 00:09:32.316 { 00:09:32.316 "name": "BaseBdev3", 00:09:32.316 "uuid": "d26be723-a034-4d39-90f9-d2d894f86dff", 00:09:32.316 "is_configured": true, 00:09:32.316 "data_offset": 0, 00:09:32.316 "data_size": 65536 00:09:32.316 }, 00:09:32.316 { 00:09:32.316 "name": "BaseBdev4", 00:09:32.316 "uuid": "2a0cf5b2-c55b-4beb-83ef-8ee30afb9354", 00:09:32.316 "is_configured": true, 00:09:32.316 "data_offset": 0, 00:09:32.316 "data_size": 65536 00:09:32.316 } 00:09:32.316 ] 00:09:32.316 }' 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.316 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.886 [2024-11-26 12:52:50.332876] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.886 [2024-11-26 12:52:50.403846] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:32.886 12:52:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.886 [2024-11-26 12:52:50.470750] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:32.886 [2024-11-26 12:52:50.470798] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.886 BaseBdev2 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:32.886 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:32.887 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:32.887 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:09:32.887 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:32.887 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.887 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.887 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.887 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:32.887 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.887 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.148 [ 00:09:33.148 { 00:09:33.148 "name": "BaseBdev2", 00:09:33.148 "aliases": [ 00:09:33.148 "3fbdd971-5a03-4de1-b8c9-be28816f263c" 00:09:33.148 ], 00:09:33.148 "product_name": "Malloc disk", 00:09:33.148 "block_size": 512, 00:09:33.148 "num_blocks": 65536, 00:09:33.148 "uuid": "3fbdd971-5a03-4de1-b8c9-be28816f263c", 00:09:33.148 "assigned_rate_limits": { 00:09:33.148 "rw_ios_per_sec": 0, 00:09:33.148 "rw_mbytes_per_sec": 0, 00:09:33.148 "r_mbytes_per_sec": 0, 00:09:33.148 "w_mbytes_per_sec": 0 00:09:33.148 }, 00:09:33.148 "claimed": false, 00:09:33.148 "zoned": false, 00:09:33.148 "supported_io_types": { 00:09:33.148 "read": true, 00:09:33.148 "write": true, 00:09:33.148 "unmap": true, 00:09:33.148 "flush": true, 00:09:33.148 "reset": true, 00:09:33.148 "nvme_admin": false, 00:09:33.148 "nvme_io": false, 00:09:33.148 "nvme_io_md": false, 00:09:33.148 "write_zeroes": true, 00:09:33.148 "zcopy": true, 00:09:33.148 "get_zone_info": false, 00:09:33.148 "zone_management": false, 00:09:33.148 "zone_append": false, 00:09:33.148 "compare": false, 00:09:33.148 "compare_and_write": false, 00:09:33.148 "abort": true, 00:09:33.148 "seek_hole": false, 00:09:33.148 
"seek_data": false, 00:09:33.148 "copy": true, 00:09:33.148 "nvme_iov_md": false 00:09:33.148 }, 00:09:33.148 "memory_domains": [ 00:09:33.148 { 00:09:33.148 "dma_device_id": "system", 00:09:33.148 "dma_device_type": 1 00:09:33.148 }, 00:09:33.148 { 00:09:33.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.148 "dma_device_type": 2 00:09:33.148 } 00:09:33.148 ], 00:09:33.148 "driver_specific": {} 00:09:33.148 } 00:09:33.148 ] 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.148 BaseBdev3 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.148 [ 00:09:33.148 { 00:09:33.148 "name": "BaseBdev3", 00:09:33.148 "aliases": [ 00:09:33.148 "8a1c17df-08d5-4d6b-8f58-8c0b07e0bffd" 00:09:33.148 ], 00:09:33.148 "product_name": "Malloc disk", 00:09:33.148 "block_size": 512, 00:09:33.148 "num_blocks": 65536, 00:09:33.148 "uuid": "8a1c17df-08d5-4d6b-8f58-8c0b07e0bffd", 00:09:33.148 "assigned_rate_limits": { 00:09:33.148 "rw_ios_per_sec": 0, 00:09:33.148 "rw_mbytes_per_sec": 0, 00:09:33.148 "r_mbytes_per_sec": 0, 00:09:33.148 "w_mbytes_per_sec": 0 00:09:33.148 }, 00:09:33.148 "claimed": false, 00:09:33.148 "zoned": false, 00:09:33.148 "supported_io_types": { 00:09:33.148 "read": true, 00:09:33.148 "write": true, 00:09:33.148 "unmap": true, 00:09:33.148 "flush": true, 00:09:33.148 "reset": true, 00:09:33.148 "nvme_admin": false, 00:09:33.148 "nvme_io": false, 00:09:33.148 "nvme_io_md": false, 00:09:33.148 "write_zeroes": true, 00:09:33.148 "zcopy": true, 00:09:33.148 "get_zone_info": false, 00:09:33.148 "zone_management": false, 00:09:33.148 "zone_append": false, 00:09:33.148 "compare": false, 00:09:33.148 "compare_and_write": false, 00:09:33.148 "abort": true, 00:09:33.148 "seek_hole": false, 00:09:33.148 "seek_data": false, 
00:09:33.148 "copy": true, 00:09:33.148 "nvme_iov_md": false 00:09:33.148 }, 00:09:33.148 "memory_domains": [ 00:09:33.148 { 00:09:33.148 "dma_device_id": "system", 00:09:33.148 "dma_device_type": 1 00:09:33.148 }, 00:09:33.148 { 00:09:33.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.148 "dma_device_type": 2 00:09:33.148 } 00:09:33.148 ], 00:09:33.148 "driver_specific": {} 00:09:33.148 } 00:09:33.148 ] 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.148 BaseBdev4 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:33.148 
12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.148 [ 00:09:33.148 { 00:09:33.148 "name": "BaseBdev4", 00:09:33.148 "aliases": [ 00:09:33.148 "70b8e1b7-be32-45c6-b47e-0425ce8d9379" 00:09:33.148 ], 00:09:33.148 "product_name": "Malloc disk", 00:09:33.148 "block_size": 512, 00:09:33.148 "num_blocks": 65536, 00:09:33.148 "uuid": "70b8e1b7-be32-45c6-b47e-0425ce8d9379", 00:09:33.148 "assigned_rate_limits": { 00:09:33.148 "rw_ios_per_sec": 0, 00:09:33.148 "rw_mbytes_per_sec": 0, 00:09:33.148 "r_mbytes_per_sec": 0, 00:09:33.148 "w_mbytes_per_sec": 0 00:09:33.148 }, 00:09:33.148 "claimed": false, 00:09:33.148 "zoned": false, 00:09:33.148 "supported_io_types": { 00:09:33.148 "read": true, 00:09:33.148 "write": true, 00:09:33.148 "unmap": true, 00:09:33.148 "flush": true, 00:09:33.148 "reset": true, 00:09:33.148 "nvme_admin": false, 00:09:33.148 "nvme_io": false, 00:09:33.148 "nvme_io_md": false, 00:09:33.148 "write_zeroes": true, 00:09:33.148 "zcopy": true, 00:09:33.148 "get_zone_info": false, 00:09:33.148 "zone_management": false, 00:09:33.148 "zone_append": false, 00:09:33.148 "compare": false, 00:09:33.148 "compare_and_write": false, 00:09:33.148 "abort": true, 00:09:33.148 "seek_hole": false, 00:09:33.148 "seek_data": false, 00:09:33.148 
"copy": true, 00:09:33.148 "nvme_iov_md": false 00:09:33.148 }, 00:09:33.148 "memory_domains": [ 00:09:33.148 { 00:09:33.148 "dma_device_id": "system", 00:09:33.148 "dma_device_type": 1 00:09:33.148 }, 00:09:33.148 { 00:09:33.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.148 "dma_device_type": 2 00:09:33.148 } 00:09:33.148 ], 00:09:33.148 "driver_specific": {} 00:09:33.148 } 00:09:33.148 ] 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:33.148 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:33.149 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:33.149 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.149 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.149 [2024-11-26 12:52:50.698153] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:33.149 [2024-11-26 12:52:50.698203] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:33.149 [2024-11-26 12:52:50.698225] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:33.149 [2024-11-26 12:52:50.699954] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:33.149 [2024-11-26 12:52:50.700005] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:33.149 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.149 12:52:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:33.149 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.149 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.149 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:33.149 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.149 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:33.149 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.149 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.149 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.149 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.149 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.149 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.149 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.149 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.149 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.149 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.149 "name": "Existed_Raid", 00:09:33.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.149 "strip_size_kb": 64, 00:09:33.149 "state": "configuring", 00:09:33.149 
"raid_level": "raid0", 00:09:33.149 "superblock": false, 00:09:33.149 "num_base_bdevs": 4, 00:09:33.149 "num_base_bdevs_discovered": 3, 00:09:33.149 "num_base_bdevs_operational": 4, 00:09:33.149 "base_bdevs_list": [ 00:09:33.149 { 00:09:33.149 "name": "BaseBdev1", 00:09:33.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.149 "is_configured": false, 00:09:33.149 "data_offset": 0, 00:09:33.149 "data_size": 0 00:09:33.149 }, 00:09:33.149 { 00:09:33.149 "name": "BaseBdev2", 00:09:33.149 "uuid": "3fbdd971-5a03-4de1-b8c9-be28816f263c", 00:09:33.149 "is_configured": true, 00:09:33.149 "data_offset": 0, 00:09:33.149 "data_size": 65536 00:09:33.149 }, 00:09:33.149 { 00:09:33.149 "name": "BaseBdev3", 00:09:33.149 "uuid": "8a1c17df-08d5-4d6b-8f58-8c0b07e0bffd", 00:09:33.149 "is_configured": true, 00:09:33.149 "data_offset": 0, 00:09:33.149 "data_size": 65536 00:09:33.149 }, 00:09:33.149 { 00:09:33.149 "name": "BaseBdev4", 00:09:33.149 "uuid": "70b8e1b7-be32-45c6-b47e-0425ce8d9379", 00:09:33.149 "is_configured": true, 00:09:33.149 "data_offset": 0, 00:09:33.149 "data_size": 65536 00:09:33.149 } 00:09:33.149 ] 00:09:33.149 }' 00:09:33.149 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.149 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.410 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:33.410 12:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.410 12:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.410 [2024-11-26 12:52:51.053519] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:33.410 12:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.410 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:33.410 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.410 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.410 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:33.410 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.410 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:33.410 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.410 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.410 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.410 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.410 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.410 12:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.410 12:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.410 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.410 12:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.670 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.670 "name": "Existed_Raid", 00:09:33.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.670 "strip_size_kb": 64, 00:09:33.670 "state": "configuring", 00:09:33.670 "raid_level": "raid0", 00:09:33.670 "superblock": false, 00:09:33.670 
"num_base_bdevs": 4, 00:09:33.670 "num_base_bdevs_discovered": 2, 00:09:33.670 "num_base_bdevs_operational": 4, 00:09:33.670 "base_bdevs_list": [ 00:09:33.670 { 00:09:33.670 "name": "BaseBdev1", 00:09:33.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.670 "is_configured": false, 00:09:33.670 "data_offset": 0, 00:09:33.670 "data_size": 0 00:09:33.670 }, 00:09:33.670 { 00:09:33.670 "name": null, 00:09:33.670 "uuid": "3fbdd971-5a03-4de1-b8c9-be28816f263c", 00:09:33.670 "is_configured": false, 00:09:33.670 "data_offset": 0, 00:09:33.670 "data_size": 65536 00:09:33.670 }, 00:09:33.670 { 00:09:33.670 "name": "BaseBdev3", 00:09:33.670 "uuid": "8a1c17df-08d5-4d6b-8f58-8c0b07e0bffd", 00:09:33.670 "is_configured": true, 00:09:33.670 "data_offset": 0, 00:09:33.670 "data_size": 65536 00:09:33.670 }, 00:09:33.670 { 00:09:33.670 "name": "BaseBdev4", 00:09:33.670 "uuid": "70b8e1b7-be32-45c6-b47e-0425ce8d9379", 00:09:33.670 "is_configured": true, 00:09:33.670 "data_offset": 0, 00:09:33.670 "data_size": 65536 00:09:33.670 } 00:09:33.670 ] 00:09:33.670 }' 00:09:33.670 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.670 12:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.930 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.930 12:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.930 12:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.930 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:33.930 12:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.930 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:33.930 12:52:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:33.930 12:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.930 12:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.930 [2024-11-26 12:52:51.547635] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.930 BaseBdev1 00:09:33.930 12:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.930 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:33.930 12:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:33.930 12:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:33.930 12:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:33.930 12:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:33.930 12:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:33.930 12:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:33.930 12:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.930 12:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.930 12:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.930 12:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:33.930 12:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.930 12:52:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:33.930 [ 00:09:33.930 { 00:09:33.930 "name": "BaseBdev1", 00:09:33.930 "aliases": [ 00:09:33.930 "ad1d6420-38a0-4b46-bcef-85b676917503" 00:09:33.930 ], 00:09:33.930 "product_name": "Malloc disk", 00:09:33.930 "block_size": 512, 00:09:33.930 "num_blocks": 65536, 00:09:33.930 "uuid": "ad1d6420-38a0-4b46-bcef-85b676917503", 00:09:33.930 "assigned_rate_limits": { 00:09:33.930 "rw_ios_per_sec": 0, 00:09:33.930 "rw_mbytes_per_sec": 0, 00:09:33.930 "r_mbytes_per_sec": 0, 00:09:33.930 "w_mbytes_per_sec": 0 00:09:33.930 }, 00:09:33.930 "claimed": true, 00:09:33.930 "claim_type": "exclusive_write", 00:09:33.930 "zoned": false, 00:09:33.930 "supported_io_types": { 00:09:33.930 "read": true, 00:09:33.930 "write": true, 00:09:33.930 "unmap": true, 00:09:33.930 "flush": true, 00:09:33.930 "reset": true, 00:09:33.930 "nvme_admin": false, 00:09:33.930 "nvme_io": false, 00:09:33.930 "nvme_io_md": false, 00:09:33.930 "write_zeroes": true, 00:09:33.930 "zcopy": true, 00:09:33.930 "get_zone_info": false, 00:09:33.930 "zone_management": false, 00:09:33.930 "zone_append": false, 00:09:33.930 "compare": false, 00:09:33.931 "compare_and_write": false, 00:09:33.931 "abort": true, 00:09:33.931 "seek_hole": false, 00:09:33.931 "seek_data": false, 00:09:33.931 "copy": true, 00:09:33.931 "nvme_iov_md": false 00:09:33.931 }, 00:09:33.931 "memory_domains": [ 00:09:33.931 { 00:09:33.931 "dma_device_id": "system", 00:09:33.931 "dma_device_type": 1 00:09:33.931 }, 00:09:33.931 { 00:09:33.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.931 "dma_device_type": 2 00:09:33.931 } 00:09:33.931 ], 00:09:33.931 "driver_specific": {} 00:09:33.931 } 00:09:33.931 ] 00:09:33.931 12:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.931 12:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:33.931 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:33.931 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.931 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.931 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:33.931 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.931 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:33.931 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.931 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.931 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.931 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.931 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.931 12:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.931 12:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.931 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.191 12:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.191 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.191 "name": "Existed_Raid", 00:09:34.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.191 "strip_size_kb": 64, 00:09:34.191 "state": "configuring", 00:09:34.191 "raid_level": "raid0", 00:09:34.191 "superblock": false, 
00:09:34.191 "num_base_bdevs": 4, 00:09:34.191 "num_base_bdevs_discovered": 3, 00:09:34.191 "num_base_bdevs_operational": 4, 00:09:34.191 "base_bdevs_list": [ 00:09:34.191 { 00:09:34.191 "name": "BaseBdev1", 00:09:34.191 "uuid": "ad1d6420-38a0-4b46-bcef-85b676917503", 00:09:34.191 "is_configured": true, 00:09:34.191 "data_offset": 0, 00:09:34.191 "data_size": 65536 00:09:34.191 }, 00:09:34.191 { 00:09:34.191 "name": null, 00:09:34.191 "uuid": "3fbdd971-5a03-4de1-b8c9-be28816f263c", 00:09:34.191 "is_configured": false, 00:09:34.191 "data_offset": 0, 00:09:34.191 "data_size": 65536 00:09:34.191 }, 00:09:34.191 { 00:09:34.191 "name": "BaseBdev3", 00:09:34.191 "uuid": "8a1c17df-08d5-4d6b-8f58-8c0b07e0bffd", 00:09:34.191 "is_configured": true, 00:09:34.191 "data_offset": 0, 00:09:34.191 "data_size": 65536 00:09:34.191 }, 00:09:34.191 { 00:09:34.191 "name": "BaseBdev4", 00:09:34.191 "uuid": "70b8e1b7-be32-45c6-b47e-0425ce8d9379", 00:09:34.191 "is_configured": true, 00:09:34.191 "data_offset": 0, 00:09:34.191 "data_size": 65536 00:09:34.191 } 00:09:34.191 ] 00:09:34.191 }' 00:09:34.191 12:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.191 12:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.451 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:34.451 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.451 12:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.451 12:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.451 12:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.451 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:34.451 12:52:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:34.451 12:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.451 12:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.451 [2024-11-26 12:52:52.034888] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:34.451 12:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.451 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:34.451 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.451 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.451 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:34.451 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.451 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:34.451 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.451 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.451 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.451 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.451 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.451 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.451 12:52:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.451 12:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.451 12:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.451 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.451 "name": "Existed_Raid", 00:09:34.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.451 "strip_size_kb": 64, 00:09:34.451 "state": "configuring", 00:09:34.451 "raid_level": "raid0", 00:09:34.451 "superblock": false, 00:09:34.451 "num_base_bdevs": 4, 00:09:34.451 "num_base_bdevs_discovered": 2, 00:09:34.451 "num_base_bdevs_operational": 4, 00:09:34.451 "base_bdevs_list": [ 00:09:34.451 { 00:09:34.451 "name": "BaseBdev1", 00:09:34.451 "uuid": "ad1d6420-38a0-4b46-bcef-85b676917503", 00:09:34.451 "is_configured": true, 00:09:34.451 "data_offset": 0, 00:09:34.451 "data_size": 65536 00:09:34.451 }, 00:09:34.451 { 00:09:34.451 "name": null, 00:09:34.451 "uuid": "3fbdd971-5a03-4de1-b8c9-be28816f263c", 00:09:34.451 "is_configured": false, 00:09:34.451 "data_offset": 0, 00:09:34.451 "data_size": 65536 00:09:34.452 }, 00:09:34.452 { 00:09:34.452 "name": null, 00:09:34.452 "uuid": "8a1c17df-08d5-4d6b-8f58-8c0b07e0bffd", 00:09:34.452 "is_configured": false, 00:09:34.452 "data_offset": 0, 00:09:34.452 "data_size": 65536 00:09:34.452 }, 00:09:34.452 { 00:09:34.452 "name": "BaseBdev4", 00:09:34.452 "uuid": "70b8e1b7-be32-45c6-b47e-0425ce8d9379", 00:09:34.452 "is_configured": true, 00:09:34.452 "data_offset": 0, 00:09:34.452 "data_size": 65536 00:09:34.452 } 00:09:34.452 ] 00:09:34.452 }' 00:09:34.452 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.452 12:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.020 12:52:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.021 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:35.021 12:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.021 12:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.021 12:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.021 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:35.021 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:35.021 12:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.021 12:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.021 [2024-11-26 12:52:52.482198] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:35.021 12:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.021 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:35.021 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.021 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.021 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:35.021 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.021 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:35.021 12:52:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.021 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.021 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.021 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.021 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.021 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.021 12:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.021 12:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.021 12:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.021 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.021 "name": "Existed_Raid", 00:09:35.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.021 "strip_size_kb": 64, 00:09:35.021 "state": "configuring", 00:09:35.021 "raid_level": "raid0", 00:09:35.021 "superblock": false, 00:09:35.021 "num_base_bdevs": 4, 00:09:35.021 "num_base_bdevs_discovered": 3, 00:09:35.021 "num_base_bdevs_operational": 4, 00:09:35.021 "base_bdevs_list": [ 00:09:35.021 { 00:09:35.021 "name": "BaseBdev1", 00:09:35.021 "uuid": "ad1d6420-38a0-4b46-bcef-85b676917503", 00:09:35.021 "is_configured": true, 00:09:35.021 "data_offset": 0, 00:09:35.021 "data_size": 65536 00:09:35.021 }, 00:09:35.021 { 00:09:35.021 "name": null, 00:09:35.021 "uuid": "3fbdd971-5a03-4de1-b8c9-be28816f263c", 00:09:35.021 "is_configured": false, 00:09:35.021 "data_offset": 0, 00:09:35.021 "data_size": 65536 00:09:35.021 }, 00:09:35.021 { 00:09:35.021 "name": "BaseBdev3", 00:09:35.021 "uuid": "8a1c17df-08d5-4d6b-8f58-8c0b07e0bffd", 
00:09:35.021 "is_configured": true, 00:09:35.021 "data_offset": 0, 00:09:35.021 "data_size": 65536 00:09:35.021 }, 00:09:35.021 { 00:09:35.021 "name": "BaseBdev4", 00:09:35.021 "uuid": "70b8e1b7-be32-45c6-b47e-0425ce8d9379", 00:09:35.021 "is_configured": true, 00:09:35.021 "data_offset": 0, 00:09:35.021 "data_size": 65536 00:09:35.021 } 00:09:35.021 ] 00:09:35.021 }' 00:09:35.021 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.021 12:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.282 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.282 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:35.282 12:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.282 12:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.282 12:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.282 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:35.282 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:35.282 12:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.282 12:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.282 [2024-11-26 12:52:52.949379] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:35.542 12:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.542 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:35.542 12:52:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.542 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.542 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:35.542 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.542 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:35.542 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.542 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.542 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.542 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.542 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.542 12:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.542 12:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.542 12:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.542 12:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.542 12:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.542 "name": "Existed_Raid", 00:09:35.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.542 "strip_size_kb": 64, 00:09:35.542 "state": "configuring", 00:09:35.542 "raid_level": "raid0", 00:09:35.542 "superblock": false, 00:09:35.542 "num_base_bdevs": 4, 00:09:35.542 "num_base_bdevs_discovered": 2, 00:09:35.542 
"num_base_bdevs_operational": 4, 00:09:35.542 "base_bdevs_list": [ 00:09:35.542 { 00:09:35.542 "name": null, 00:09:35.542 "uuid": "ad1d6420-38a0-4b46-bcef-85b676917503", 00:09:35.542 "is_configured": false, 00:09:35.542 "data_offset": 0, 00:09:35.542 "data_size": 65536 00:09:35.542 }, 00:09:35.542 { 00:09:35.542 "name": null, 00:09:35.542 "uuid": "3fbdd971-5a03-4de1-b8c9-be28816f263c", 00:09:35.542 "is_configured": false, 00:09:35.542 "data_offset": 0, 00:09:35.542 "data_size": 65536 00:09:35.542 }, 00:09:35.542 { 00:09:35.542 "name": "BaseBdev3", 00:09:35.542 "uuid": "8a1c17df-08d5-4d6b-8f58-8c0b07e0bffd", 00:09:35.542 "is_configured": true, 00:09:35.542 "data_offset": 0, 00:09:35.542 "data_size": 65536 00:09:35.542 }, 00:09:35.542 { 00:09:35.542 "name": "BaseBdev4", 00:09:35.542 "uuid": "70b8e1b7-be32-45c6-b47e-0425ce8d9379", 00:09:35.542 "is_configured": true, 00:09:35.542 "data_offset": 0, 00:09:35.542 "data_size": 65536 00:09:35.542 } 00:09:35.542 ] 00:09:35.542 }' 00:09:35.542 12:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.542 12:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.802 12:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.802 12:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.802 12:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:35.802 12:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.802 12:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.062 12:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:36.062 12:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:09:36.062 12:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.062 12:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.062 [2024-11-26 12:52:53.502949] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:36.062 12:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.062 12:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:36.062 12:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.062 12:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.062 12:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:36.062 12:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.062 12:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:36.062 12:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.062 12:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.062 12:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.062 12:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.062 12:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.062 12:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.062 12:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.062 12:52:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.062 12:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.062 12:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.062 "name": "Existed_Raid", 00:09:36.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.062 "strip_size_kb": 64, 00:09:36.062 "state": "configuring", 00:09:36.062 "raid_level": "raid0", 00:09:36.062 "superblock": false, 00:09:36.062 "num_base_bdevs": 4, 00:09:36.062 "num_base_bdevs_discovered": 3, 00:09:36.062 "num_base_bdevs_operational": 4, 00:09:36.062 "base_bdevs_list": [ 00:09:36.062 { 00:09:36.062 "name": null, 00:09:36.062 "uuid": "ad1d6420-38a0-4b46-bcef-85b676917503", 00:09:36.062 "is_configured": false, 00:09:36.062 "data_offset": 0, 00:09:36.062 "data_size": 65536 00:09:36.062 }, 00:09:36.062 { 00:09:36.062 "name": "BaseBdev2", 00:09:36.062 "uuid": "3fbdd971-5a03-4de1-b8c9-be28816f263c", 00:09:36.062 "is_configured": true, 00:09:36.062 "data_offset": 0, 00:09:36.062 "data_size": 65536 00:09:36.062 }, 00:09:36.062 { 00:09:36.062 "name": "BaseBdev3", 00:09:36.062 "uuid": "8a1c17df-08d5-4d6b-8f58-8c0b07e0bffd", 00:09:36.062 "is_configured": true, 00:09:36.062 "data_offset": 0, 00:09:36.062 "data_size": 65536 00:09:36.062 }, 00:09:36.062 { 00:09:36.062 "name": "BaseBdev4", 00:09:36.062 "uuid": "70b8e1b7-be32-45c6-b47e-0425ce8d9379", 00:09:36.062 "is_configured": true, 00:09:36.062 "data_offset": 0, 00:09:36.062 "data_size": 65536 00:09:36.062 } 00:09:36.062 ] 00:09:36.062 }' 00:09:36.062 12:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.062 12:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.322 12:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.322 12:52:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.322 12:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.322 12:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:36.322 12:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.322 12:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:36.322 12:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:36.322 12:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.322 12:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.322 12:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.322 12:52:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ad1d6420-38a0-4b46-bcef-85b676917503 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.583 [2024-11-26 12:52:54.020988] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:36.583 [2024-11-26 12:52:54.021035] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:36.583 [2024-11-26 12:52:54.021043] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:36.583 [2024-11-26 12:52:54.021309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:09:36.583 [2024-11-26 12:52:54.021429] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:36.583 [2024-11-26 12:52:54.021445] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:36.583 [2024-11-26 12:52:54.021608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.583 NewBaseBdev 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:36.583 [ 00:09:36.583 { 00:09:36.583 "name": "NewBaseBdev", 00:09:36.583 "aliases": [ 00:09:36.583 "ad1d6420-38a0-4b46-bcef-85b676917503" 00:09:36.583 ], 00:09:36.583 "product_name": "Malloc disk", 00:09:36.583 "block_size": 512, 00:09:36.583 "num_blocks": 65536, 00:09:36.583 "uuid": "ad1d6420-38a0-4b46-bcef-85b676917503", 00:09:36.583 "assigned_rate_limits": { 00:09:36.583 "rw_ios_per_sec": 0, 00:09:36.583 "rw_mbytes_per_sec": 0, 00:09:36.583 "r_mbytes_per_sec": 0, 00:09:36.583 "w_mbytes_per_sec": 0 00:09:36.583 }, 00:09:36.583 "claimed": true, 00:09:36.583 "claim_type": "exclusive_write", 00:09:36.583 "zoned": false, 00:09:36.583 "supported_io_types": { 00:09:36.583 "read": true, 00:09:36.583 "write": true, 00:09:36.583 "unmap": true, 00:09:36.583 "flush": true, 00:09:36.583 "reset": true, 00:09:36.583 "nvme_admin": false, 00:09:36.583 "nvme_io": false, 00:09:36.583 "nvme_io_md": false, 00:09:36.583 "write_zeroes": true, 00:09:36.583 "zcopy": true, 00:09:36.583 "get_zone_info": false, 00:09:36.583 "zone_management": false, 00:09:36.583 "zone_append": false, 00:09:36.583 "compare": false, 00:09:36.583 "compare_and_write": false, 00:09:36.583 "abort": true, 00:09:36.583 "seek_hole": false, 00:09:36.583 "seek_data": false, 00:09:36.583 "copy": true, 00:09:36.583 "nvme_iov_md": false 00:09:36.583 }, 00:09:36.583 "memory_domains": [ 00:09:36.583 { 00:09:36.583 "dma_device_id": "system", 00:09:36.583 "dma_device_type": 1 00:09:36.583 }, 00:09:36.583 { 00:09:36.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.583 "dma_device_type": 2 00:09:36.583 } 00:09:36.583 ], 00:09:36.583 "driver_specific": {} 00:09:36.583 } 00:09:36.583 ] 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.583 "name": "Existed_Raid", 00:09:36.583 "uuid": "f639ba4c-cf72-4f57-aca1-5ed0e1c52b4a", 00:09:36.583 "strip_size_kb": 64, 00:09:36.583 "state": "online", 00:09:36.583 "raid_level": "raid0", 00:09:36.583 "superblock": false, 00:09:36.583 "num_base_bdevs": 4, 00:09:36.583 
"num_base_bdevs_discovered": 4, 00:09:36.583 "num_base_bdevs_operational": 4, 00:09:36.583 "base_bdevs_list": [ 00:09:36.583 { 00:09:36.583 "name": "NewBaseBdev", 00:09:36.583 "uuid": "ad1d6420-38a0-4b46-bcef-85b676917503", 00:09:36.583 "is_configured": true, 00:09:36.583 "data_offset": 0, 00:09:36.583 "data_size": 65536 00:09:36.583 }, 00:09:36.583 { 00:09:36.583 "name": "BaseBdev2", 00:09:36.583 "uuid": "3fbdd971-5a03-4de1-b8c9-be28816f263c", 00:09:36.583 "is_configured": true, 00:09:36.583 "data_offset": 0, 00:09:36.583 "data_size": 65536 00:09:36.583 }, 00:09:36.583 { 00:09:36.583 "name": "BaseBdev3", 00:09:36.583 "uuid": "8a1c17df-08d5-4d6b-8f58-8c0b07e0bffd", 00:09:36.583 "is_configured": true, 00:09:36.583 "data_offset": 0, 00:09:36.583 "data_size": 65536 00:09:36.583 }, 00:09:36.583 { 00:09:36.583 "name": "BaseBdev4", 00:09:36.583 "uuid": "70b8e1b7-be32-45c6-b47e-0425ce8d9379", 00:09:36.583 "is_configured": true, 00:09:36.583 "data_offset": 0, 00:09:36.583 "data_size": 65536 00:09:36.583 } 00:09:36.583 ] 00:09:36.583 }' 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.583 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.844 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:36.844 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:36.844 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:36.844 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:36.844 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:36.844 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:37.104 12:52:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:37.104 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:37.104 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.104 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.104 [2024-11-26 12:52:54.532480] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:37.104 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.104 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:37.104 "name": "Existed_Raid", 00:09:37.104 "aliases": [ 00:09:37.104 "f639ba4c-cf72-4f57-aca1-5ed0e1c52b4a" 00:09:37.104 ], 00:09:37.104 "product_name": "Raid Volume", 00:09:37.104 "block_size": 512, 00:09:37.104 "num_blocks": 262144, 00:09:37.104 "uuid": "f639ba4c-cf72-4f57-aca1-5ed0e1c52b4a", 00:09:37.104 "assigned_rate_limits": { 00:09:37.104 "rw_ios_per_sec": 0, 00:09:37.104 "rw_mbytes_per_sec": 0, 00:09:37.104 "r_mbytes_per_sec": 0, 00:09:37.104 "w_mbytes_per_sec": 0 00:09:37.104 }, 00:09:37.104 "claimed": false, 00:09:37.104 "zoned": false, 00:09:37.104 "supported_io_types": { 00:09:37.104 "read": true, 00:09:37.104 "write": true, 00:09:37.104 "unmap": true, 00:09:37.104 "flush": true, 00:09:37.104 "reset": true, 00:09:37.104 "nvme_admin": false, 00:09:37.104 "nvme_io": false, 00:09:37.104 "nvme_io_md": false, 00:09:37.104 "write_zeroes": true, 00:09:37.104 "zcopy": false, 00:09:37.104 "get_zone_info": false, 00:09:37.104 "zone_management": false, 00:09:37.104 "zone_append": false, 00:09:37.104 "compare": false, 00:09:37.104 "compare_and_write": false, 00:09:37.104 "abort": false, 00:09:37.104 "seek_hole": false, 00:09:37.104 "seek_data": false, 00:09:37.104 "copy": false, 00:09:37.104 "nvme_iov_md": false 00:09:37.104 }, 00:09:37.104 "memory_domains": [ 
00:09:37.104 { 00:09:37.104 "dma_device_id": "system", 00:09:37.104 "dma_device_type": 1 00:09:37.104 }, 00:09:37.104 { 00:09:37.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.104 "dma_device_type": 2 00:09:37.104 }, 00:09:37.104 { 00:09:37.104 "dma_device_id": "system", 00:09:37.104 "dma_device_type": 1 00:09:37.104 }, 00:09:37.104 { 00:09:37.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.104 "dma_device_type": 2 00:09:37.104 }, 00:09:37.104 { 00:09:37.104 "dma_device_id": "system", 00:09:37.104 "dma_device_type": 1 00:09:37.104 }, 00:09:37.104 { 00:09:37.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.104 "dma_device_type": 2 00:09:37.104 }, 00:09:37.104 { 00:09:37.104 "dma_device_id": "system", 00:09:37.104 "dma_device_type": 1 00:09:37.104 }, 00:09:37.104 { 00:09:37.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.105 "dma_device_type": 2 00:09:37.105 } 00:09:37.105 ], 00:09:37.105 "driver_specific": { 00:09:37.105 "raid": { 00:09:37.105 "uuid": "f639ba4c-cf72-4f57-aca1-5ed0e1c52b4a", 00:09:37.105 "strip_size_kb": 64, 00:09:37.105 "state": "online", 00:09:37.105 "raid_level": "raid0", 00:09:37.105 "superblock": false, 00:09:37.105 "num_base_bdevs": 4, 00:09:37.105 "num_base_bdevs_discovered": 4, 00:09:37.105 "num_base_bdevs_operational": 4, 00:09:37.105 "base_bdevs_list": [ 00:09:37.105 { 00:09:37.105 "name": "NewBaseBdev", 00:09:37.105 "uuid": "ad1d6420-38a0-4b46-bcef-85b676917503", 00:09:37.105 "is_configured": true, 00:09:37.105 "data_offset": 0, 00:09:37.105 "data_size": 65536 00:09:37.105 }, 00:09:37.105 { 00:09:37.105 "name": "BaseBdev2", 00:09:37.105 "uuid": "3fbdd971-5a03-4de1-b8c9-be28816f263c", 00:09:37.105 "is_configured": true, 00:09:37.105 "data_offset": 0, 00:09:37.105 "data_size": 65536 00:09:37.105 }, 00:09:37.105 { 00:09:37.105 "name": "BaseBdev3", 00:09:37.105 "uuid": "8a1c17df-08d5-4d6b-8f58-8c0b07e0bffd", 00:09:37.105 "is_configured": true, 00:09:37.105 "data_offset": 0, 00:09:37.105 "data_size": 65536 
00:09:37.105 }, 00:09:37.105 { 00:09:37.105 "name": "BaseBdev4", 00:09:37.105 "uuid": "70b8e1b7-be32-45c6-b47e-0425ce8d9379", 00:09:37.105 "is_configured": true, 00:09:37.105 "data_offset": 0, 00:09:37.105 "data_size": 65536 00:09:37.105 } 00:09:37.105 ] 00:09:37.105 } 00:09:37.105 } 00:09:37.105 }' 00:09:37.105 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:37.105 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:37.105 BaseBdev2 00:09:37.105 BaseBdev3 00:09:37.105 BaseBdev4' 00:09:37.105 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.105 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:37.105 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.105 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:37.105 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.105 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.105 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.105 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.105 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.105 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.105 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.105 
12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:37.105 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.105 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.105 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.105 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.105 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.105 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.105 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.105 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.105 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:37.105 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.105 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.366 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.366 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.366 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.366 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.366 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:09:37.366 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.366 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.366 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.366 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.366 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.366 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.366 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:37.366 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.366 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.366 [2024-11-26 12:52:54.847588] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:37.366 [2024-11-26 12:52:54.847617] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:37.366 [2024-11-26 12:52:54.847676] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:37.366 [2024-11-26 12:52:54.847735] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:37.366 [2024-11-26 12:52:54.847754] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:37.366 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.366 12:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80584 00:09:37.366 12:52:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@950 -- # '[' -z 80584 ']' 00:09:37.366 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 80584 00:09:37.366 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:37.366 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:37.366 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80584 00:09:37.366 killing process with pid 80584 00:09:37.366 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:37.366 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:37.366 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80584' 00:09:37.366 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 80584 00:09:37.366 [2024-11-26 12:52:54.891438] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:37.366 12:52:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 80584 00:09:37.366 [2024-11-26 12:52:54.931501] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:37.626 12:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:37.626 00:09:37.626 real 0m9.261s 00:09:37.626 user 0m15.813s 00:09:37.626 sys 0m1.853s 00:09:37.626 12:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:37.626 12:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.626 ************************************ 00:09:37.626 END TEST raid_state_function_test 00:09:37.626 ************************************ 00:09:37.626 12:52:55 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:09:37.626 12:52:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:37.626 12:52:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:37.626 12:52:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:37.626 ************************************ 00:09:37.626 START TEST raid_state_function_test_sb 00:09:37.626 ************************************ 00:09:37.626 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:09:37.626 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:37.626 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:37.626 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:37.626 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:37.627 
12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81229 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:37.627 Process raid pid: 81229 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81229' 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81229 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 81229 ']' 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:37.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:37.627 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.887 [2024-11-26 12:52:55.343124] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:37.887 [2024-11-26 12:52:55.343279] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:37.887 [2024-11-26 12:52:55.504301] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.887 [2024-11-26 12:52:55.548603] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.147 [2024-11-26 12:52:55.591081] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.147 [2024-11-26 12:52:55.591117] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.717 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:38.717 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:38.717 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:38.717 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.717 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.717 [2024-11-26 12:52:56.168649] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:38.717 [2024-11-26 12:52:56.168696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:38.717 [2024-11-26 12:52:56.168725] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:38.717 [2024-11-26 12:52:56.168736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:38.717 [2024-11-26 12:52:56.168742] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:09:38.717 [2024-11-26 12:52:56.168753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:38.717 [2024-11-26 12:52:56.168759] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:38.717 [2024-11-26 12:52:56.168767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:38.717 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.717 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:38.717 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.717 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.717 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:38.717 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.717 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:38.717 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.717 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.717 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.717 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.717 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.717 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.717 12:52:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.717 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.717 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.717 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.717 "name": "Existed_Raid", 00:09:38.717 "uuid": "2f7d9201-0cfb-4122-9819-1ad2fe557f85", 00:09:38.717 "strip_size_kb": 64, 00:09:38.717 "state": "configuring", 00:09:38.717 "raid_level": "raid0", 00:09:38.717 "superblock": true, 00:09:38.717 "num_base_bdevs": 4, 00:09:38.717 "num_base_bdevs_discovered": 0, 00:09:38.717 "num_base_bdevs_operational": 4, 00:09:38.717 "base_bdevs_list": [ 00:09:38.717 { 00:09:38.717 "name": "BaseBdev1", 00:09:38.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.717 "is_configured": false, 00:09:38.717 "data_offset": 0, 00:09:38.717 "data_size": 0 00:09:38.717 }, 00:09:38.717 { 00:09:38.717 "name": "BaseBdev2", 00:09:38.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.717 "is_configured": false, 00:09:38.717 "data_offset": 0, 00:09:38.717 "data_size": 0 00:09:38.717 }, 00:09:38.717 { 00:09:38.717 "name": "BaseBdev3", 00:09:38.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.717 "is_configured": false, 00:09:38.717 "data_offset": 0, 00:09:38.717 "data_size": 0 00:09:38.717 }, 00:09:38.717 { 00:09:38.717 "name": "BaseBdev4", 00:09:38.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.717 "is_configured": false, 00:09:38.717 "data_offset": 0, 00:09:38.717 "data_size": 0 00:09:38.717 } 00:09:38.717 ] 00:09:38.717 }' 00:09:38.717 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.717 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.978 12:52:56 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:38.978 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.979 [2024-11-26 12:52:56.563844] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:38.979 [2024-11-26 12:52:56.563889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.979 [2024-11-26 12:52:56.571878] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:38.979 [2024-11-26 12:52:56.571916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:38.979 [2024-11-26 12:52:56.571924] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:38.979 [2024-11-26 12:52:56.571933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:38.979 [2024-11-26 12:52:56.571939] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:38.979 [2024-11-26 12:52:56.571947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:38.979 [2024-11-26 12:52:56.571953] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:09:38.979 [2024-11-26 12:52:56.571962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.979 [2024-11-26 12:52:56.588703] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:38.979 BaseBdev1 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.979 [ 00:09:38.979 { 00:09:38.979 "name": "BaseBdev1", 00:09:38.979 "aliases": [ 00:09:38.979 "5547c388-5107-4e6a-a1f8-53e74d17be28" 00:09:38.979 ], 00:09:38.979 "product_name": "Malloc disk", 00:09:38.979 "block_size": 512, 00:09:38.979 "num_blocks": 65536, 00:09:38.979 "uuid": "5547c388-5107-4e6a-a1f8-53e74d17be28", 00:09:38.979 "assigned_rate_limits": { 00:09:38.979 "rw_ios_per_sec": 0, 00:09:38.979 "rw_mbytes_per_sec": 0, 00:09:38.979 "r_mbytes_per_sec": 0, 00:09:38.979 "w_mbytes_per_sec": 0 00:09:38.979 }, 00:09:38.979 "claimed": true, 00:09:38.979 "claim_type": "exclusive_write", 00:09:38.979 "zoned": false, 00:09:38.979 "supported_io_types": { 00:09:38.979 "read": true, 00:09:38.979 "write": true, 00:09:38.979 "unmap": true, 00:09:38.979 "flush": true, 00:09:38.979 "reset": true, 00:09:38.979 "nvme_admin": false, 00:09:38.979 "nvme_io": false, 00:09:38.979 "nvme_io_md": false, 00:09:38.979 "write_zeroes": true, 00:09:38.979 "zcopy": true, 00:09:38.979 "get_zone_info": false, 00:09:38.979 "zone_management": false, 00:09:38.979 "zone_append": false, 00:09:38.979 "compare": false, 00:09:38.979 "compare_and_write": false, 00:09:38.979 "abort": true, 00:09:38.979 "seek_hole": false, 00:09:38.979 "seek_data": false, 00:09:38.979 "copy": true, 00:09:38.979 "nvme_iov_md": false 00:09:38.979 }, 00:09:38.979 "memory_domains": [ 00:09:38.979 { 00:09:38.979 "dma_device_id": "system", 00:09:38.979 "dma_device_type": 1 00:09:38.979 }, 00:09:38.979 { 00:09:38.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.979 "dma_device_type": 2 00:09:38.979 } 00:09:38.979 ], 00:09:38.979 "driver_specific": {} 
00:09:38.979 } 00:09:38.979 ] 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.979 12:52:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.239 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.239 "name": "Existed_Raid", 00:09:39.239 "uuid": "ec902e2c-482d-4965-9f82-8f7ce17e7616", 00:09:39.239 "strip_size_kb": 64, 00:09:39.239 "state": "configuring", 00:09:39.239 "raid_level": "raid0", 00:09:39.239 "superblock": true, 00:09:39.239 "num_base_bdevs": 4, 00:09:39.239 "num_base_bdevs_discovered": 1, 00:09:39.239 "num_base_bdevs_operational": 4, 00:09:39.239 "base_bdevs_list": [ 00:09:39.239 { 00:09:39.240 "name": "BaseBdev1", 00:09:39.240 "uuid": "5547c388-5107-4e6a-a1f8-53e74d17be28", 00:09:39.240 "is_configured": true, 00:09:39.240 "data_offset": 2048, 00:09:39.240 "data_size": 63488 00:09:39.240 }, 00:09:39.240 { 00:09:39.240 "name": "BaseBdev2", 00:09:39.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.240 "is_configured": false, 00:09:39.240 "data_offset": 0, 00:09:39.240 "data_size": 0 00:09:39.240 }, 00:09:39.240 { 00:09:39.240 "name": "BaseBdev3", 00:09:39.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.240 "is_configured": false, 00:09:39.240 "data_offset": 0, 00:09:39.240 "data_size": 0 00:09:39.240 }, 00:09:39.240 { 00:09:39.240 "name": "BaseBdev4", 00:09:39.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.240 "is_configured": false, 00:09:39.240 "data_offset": 0, 00:09:39.240 "data_size": 0 00:09:39.240 } 00:09:39.240 ] 00:09:39.240 }' 00:09:39.240 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.240 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.500 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:39.500 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.500 12:52:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:39.500 [2024-11-26 12:52:57.099874] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:39.500 [2024-11-26 12:52:57.099926] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:39.500 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.500 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:39.500 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.500 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.500 [2024-11-26 12:52:57.107901] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:39.500 [2024-11-26 12:52:57.109722] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:39.500 [2024-11-26 12:52:57.109762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:39.500 [2024-11-26 12:52:57.109771] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:39.500 [2024-11-26 12:52:57.109779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:39.500 [2024-11-26 12:52:57.109785] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:39.500 [2024-11-26 12:52:57.109793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:39.500 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.500 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:39.500 12:52:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:39.500 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:39.500 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.500 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.500 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:39.500 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.500 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:39.500 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.500 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.500 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.500 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.500 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.500 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.500 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.500 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.500 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.500 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.500 "name": 
"Existed_Raid", 00:09:39.500 "uuid": "5213a159-7d15-41ea-8b8e-82a4d1a9fff9", 00:09:39.500 "strip_size_kb": 64, 00:09:39.500 "state": "configuring", 00:09:39.500 "raid_level": "raid0", 00:09:39.500 "superblock": true, 00:09:39.500 "num_base_bdevs": 4, 00:09:39.500 "num_base_bdevs_discovered": 1, 00:09:39.501 "num_base_bdevs_operational": 4, 00:09:39.501 "base_bdevs_list": [ 00:09:39.501 { 00:09:39.501 "name": "BaseBdev1", 00:09:39.501 "uuid": "5547c388-5107-4e6a-a1f8-53e74d17be28", 00:09:39.501 "is_configured": true, 00:09:39.501 "data_offset": 2048, 00:09:39.501 "data_size": 63488 00:09:39.501 }, 00:09:39.501 { 00:09:39.501 "name": "BaseBdev2", 00:09:39.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.501 "is_configured": false, 00:09:39.501 "data_offset": 0, 00:09:39.501 "data_size": 0 00:09:39.501 }, 00:09:39.501 { 00:09:39.501 "name": "BaseBdev3", 00:09:39.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.501 "is_configured": false, 00:09:39.501 "data_offset": 0, 00:09:39.501 "data_size": 0 00:09:39.501 }, 00:09:39.501 { 00:09:39.501 "name": "BaseBdev4", 00:09:39.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.501 "is_configured": false, 00:09:39.501 "data_offset": 0, 00:09:39.501 "data_size": 0 00:09:39.501 } 00:09:39.501 ] 00:09:39.501 }' 00:09:39.501 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.501 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.071 [2024-11-26 12:52:57.569372] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:09:40.071 BaseBdev2 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.071 [ 00:09:40.071 { 00:09:40.071 "name": "BaseBdev2", 00:09:40.071 "aliases": [ 00:09:40.071 "3cfe770f-d1f4-4b03-b49a-cacad4a913a1" 00:09:40.071 ], 00:09:40.071 "product_name": "Malloc disk", 00:09:40.071 "block_size": 512, 00:09:40.071 "num_blocks": 65536, 00:09:40.071 "uuid": "3cfe770f-d1f4-4b03-b49a-cacad4a913a1", 00:09:40.071 
"assigned_rate_limits": { 00:09:40.071 "rw_ios_per_sec": 0, 00:09:40.071 "rw_mbytes_per_sec": 0, 00:09:40.071 "r_mbytes_per_sec": 0, 00:09:40.071 "w_mbytes_per_sec": 0 00:09:40.071 }, 00:09:40.071 "claimed": true, 00:09:40.071 "claim_type": "exclusive_write", 00:09:40.071 "zoned": false, 00:09:40.071 "supported_io_types": { 00:09:40.071 "read": true, 00:09:40.071 "write": true, 00:09:40.071 "unmap": true, 00:09:40.071 "flush": true, 00:09:40.071 "reset": true, 00:09:40.071 "nvme_admin": false, 00:09:40.071 "nvme_io": false, 00:09:40.071 "nvme_io_md": false, 00:09:40.071 "write_zeroes": true, 00:09:40.071 "zcopy": true, 00:09:40.071 "get_zone_info": false, 00:09:40.071 "zone_management": false, 00:09:40.071 "zone_append": false, 00:09:40.071 "compare": false, 00:09:40.071 "compare_and_write": false, 00:09:40.071 "abort": true, 00:09:40.071 "seek_hole": false, 00:09:40.071 "seek_data": false, 00:09:40.071 "copy": true, 00:09:40.071 "nvme_iov_md": false 00:09:40.071 }, 00:09:40.071 "memory_domains": [ 00:09:40.071 { 00:09:40.071 "dma_device_id": "system", 00:09:40.071 "dma_device_type": 1 00:09:40.071 }, 00:09:40.071 { 00:09:40.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.071 "dma_device_type": 2 00:09:40.071 } 00:09:40.071 ], 00:09:40.071 "driver_specific": {} 00:09:40.071 } 00:09:40.071 ] 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.071 "name": "Existed_Raid", 00:09:40.071 "uuid": "5213a159-7d15-41ea-8b8e-82a4d1a9fff9", 00:09:40.071 "strip_size_kb": 64, 00:09:40.071 "state": "configuring", 00:09:40.071 "raid_level": "raid0", 00:09:40.071 "superblock": true, 00:09:40.071 "num_base_bdevs": 4, 00:09:40.071 "num_base_bdevs_discovered": 2, 00:09:40.071 "num_base_bdevs_operational": 4, 
00:09:40.071 "base_bdevs_list": [ 00:09:40.071 { 00:09:40.071 "name": "BaseBdev1", 00:09:40.071 "uuid": "5547c388-5107-4e6a-a1f8-53e74d17be28", 00:09:40.071 "is_configured": true, 00:09:40.071 "data_offset": 2048, 00:09:40.071 "data_size": 63488 00:09:40.071 }, 00:09:40.071 { 00:09:40.071 "name": "BaseBdev2", 00:09:40.071 "uuid": "3cfe770f-d1f4-4b03-b49a-cacad4a913a1", 00:09:40.071 "is_configured": true, 00:09:40.071 "data_offset": 2048, 00:09:40.071 "data_size": 63488 00:09:40.071 }, 00:09:40.071 { 00:09:40.071 "name": "BaseBdev3", 00:09:40.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.071 "is_configured": false, 00:09:40.071 "data_offset": 0, 00:09:40.071 "data_size": 0 00:09:40.071 }, 00:09:40.071 { 00:09:40.071 "name": "BaseBdev4", 00:09:40.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.071 "is_configured": false, 00:09:40.071 "data_offset": 0, 00:09:40.071 "data_size": 0 00:09:40.071 } 00:09:40.071 ] 00:09:40.071 }' 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.071 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.643 [2024-11-26 12:52:58.027594] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:40.643 BaseBdev3 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.643 [ 00:09:40.643 { 00:09:40.643 "name": "BaseBdev3", 00:09:40.643 "aliases": [ 00:09:40.643 "0d9fbed6-9ec7-4380-ad03-e7769acbd1a6" 00:09:40.643 ], 00:09:40.643 "product_name": "Malloc disk", 00:09:40.643 "block_size": 512, 00:09:40.643 "num_blocks": 65536, 00:09:40.643 "uuid": "0d9fbed6-9ec7-4380-ad03-e7769acbd1a6", 00:09:40.643 "assigned_rate_limits": { 00:09:40.643 "rw_ios_per_sec": 0, 00:09:40.643 "rw_mbytes_per_sec": 0, 00:09:40.643 "r_mbytes_per_sec": 0, 00:09:40.643 "w_mbytes_per_sec": 0 00:09:40.643 }, 00:09:40.643 "claimed": true, 00:09:40.643 "claim_type": "exclusive_write", 00:09:40.643 "zoned": false, 00:09:40.643 "supported_io_types": { 00:09:40.643 "read": true, 00:09:40.643 
"write": true, 00:09:40.643 "unmap": true, 00:09:40.643 "flush": true, 00:09:40.643 "reset": true, 00:09:40.643 "nvme_admin": false, 00:09:40.643 "nvme_io": false, 00:09:40.643 "nvme_io_md": false, 00:09:40.643 "write_zeroes": true, 00:09:40.643 "zcopy": true, 00:09:40.643 "get_zone_info": false, 00:09:40.643 "zone_management": false, 00:09:40.643 "zone_append": false, 00:09:40.643 "compare": false, 00:09:40.643 "compare_and_write": false, 00:09:40.643 "abort": true, 00:09:40.643 "seek_hole": false, 00:09:40.643 "seek_data": false, 00:09:40.643 "copy": true, 00:09:40.643 "nvme_iov_md": false 00:09:40.643 }, 00:09:40.643 "memory_domains": [ 00:09:40.643 { 00:09:40.643 "dma_device_id": "system", 00:09:40.643 "dma_device_type": 1 00:09:40.643 }, 00:09:40.643 { 00:09:40.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.643 "dma_device_type": 2 00:09:40.643 } 00:09:40.643 ], 00:09:40.643 "driver_specific": {} 00:09:40.643 } 00:09:40.643 ] 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.643 "name": "Existed_Raid", 00:09:40.643 "uuid": "5213a159-7d15-41ea-8b8e-82a4d1a9fff9", 00:09:40.643 "strip_size_kb": 64, 00:09:40.643 "state": "configuring", 00:09:40.643 "raid_level": "raid0", 00:09:40.643 "superblock": true, 00:09:40.643 "num_base_bdevs": 4, 00:09:40.643 "num_base_bdevs_discovered": 3, 00:09:40.643 "num_base_bdevs_operational": 4, 00:09:40.643 "base_bdevs_list": [ 00:09:40.643 { 00:09:40.643 "name": "BaseBdev1", 00:09:40.643 "uuid": "5547c388-5107-4e6a-a1f8-53e74d17be28", 00:09:40.643 "is_configured": true, 00:09:40.643 "data_offset": 2048, 00:09:40.643 "data_size": 63488 00:09:40.643 }, 00:09:40.643 { 00:09:40.643 "name": "BaseBdev2", 00:09:40.643 "uuid": 
"3cfe770f-d1f4-4b03-b49a-cacad4a913a1", 00:09:40.643 "is_configured": true, 00:09:40.643 "data_offset": 2048, 00:09:40.643 "data_size": 63488 00:09:40.643 }, 00:09:40.643 { 00:09:40.643 "name": "BaseBdev3", 00:09:40.643 "uuid": "0d9fbed6-9ec7-4380-ad03-e7769acbd1a6", 00:09:40.643 "is_configured": true, 00:09:40.643 "data_offset": 2048, 00:09:40.643 "data_size": 63488 00:09:40.643 }, 00:09:40.643 { 00:09:40.643 "name": "BaseBdev4", 00:09:40.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.643 "is_configured": false, 00:09:40.643 "data_offset": 0, 00:09:40.643 "data_size": 0 00:09:40.643 } 00:09:40.643 ] 00:09:40.643 }' 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.643 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.904 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:40.904 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.904 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.904 [2024-11-26 12:52:58.505830] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:40.904 [2024-11-26 12:52:58.506025] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:40.904 [2024-11-26 12:52:58.506040] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:40.904 [2024-11-26 12:52:58.506341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:40.904 [2024-11-26 12:52:58.506501] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:40.904 [2024-11-26 12:52:58.506521] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 
00:09:40.904 BaseBdev4 00:09:40.904 [2024-11-26 12:52:58.506636] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.904 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.904 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:40.904 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:40.904 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:40.904 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:40.904 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:40.904 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:40.904 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:40.904 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.904 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.904 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.904 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:40.904 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.905 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.905 [ 00:09:40.905 { 00:09:40.905 "name": "BaseBdev4", 00:09:40.905 "aliases": [ 00:09:40.905 "a11d18b0-5f07-4bc1-bb9c-409af9541bcb" 00:09:40.905 ], 00:09:40.905 "product_name": "Malloc disk", 00:09:40.905 "block_size": 512, 00:09:40.905 
"num_blocks": 65536, 00:09:40.905 "uuid": "a11d18b0-5f07-4bc1-bb9c-409af9541bcb", 00:09:40.905 "assigned_rate_limits": { 00:09:40.905 "rw_ios_per_sec": 0, 00:09:40.905 "rw_mbytes_per_sec": 0, 00:09:40.905 "r_mbytes_per_sec": 0, 00:09:40.905 "w_mbytes_per_sec": 0 00:09:40.905 }, 00:09:40.905 "claimed": true, 00:09:40.905 "claim_type": "exclusive_write", 00:09:40.905 "zoned": false, 00:09:40.905 "supported_io_types": { 00:09:40.905 "read": true, 00:09:40.905 "write": true, 00:09:40.905 "unmap": true, 00:09:40.905 "flush": true, 00:09:40.905 "reset": true, 00:09:40.905 "nvme_admin": false, 00:09:40.905 "nvme_io": false, 00:09:40.905 "nvme_io_md": false, 00:09:40.905 "write_zeroes": true, 00:09:40.905 "zcopy": true, 00:09:40.905 "get_zone_info": false, 00:09:40.905 "zone_management": false, 00:09:40.905 "zone_append": false, 00:09:40.905 "compare": false, 00:09:40.905 "compare_and_write": false, 00:09:40.905 "abort": true, 00:09:40.905 "seek_hole": false, 00:09:40.905 "seek_data": false, 00:09:40.905 "copy": true, 00:09:40.905 "nvme_iov_md": false 00:09:40.905 }, 00:09:40.905 "memory_domains": [ 00:09:40.905 { 00:09:40.905 "dma_device_id": "system", 00:09:40.905 "dma_device_type": 1 00:09:40.905 }, 00:09:40.905 { 00:09:40.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.905 "dma_device_type": 2 00:09:40.905 } 00:09:40.905 ], 00:09:40.905 "driver_specific": {} 00:09:40.905 } 00:09:40.905 ] 00:09:40.905 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.905 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:40.905 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:40.905 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:40.905 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:09:40.905 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.905 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.905 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:40.905 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.905 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:40.905 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.905 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.905 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.905 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.905 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.905 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.905 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.905 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.905 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.165 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.165 "name": "Existed_Raid", 00:09:41.165 "uuid": "5213a159-7d15-41ea-8b8e-82a4d1a9fff9", 00:09:41.165 "strip_size_kb": 64, 00:09:41.165 "state": "online", 00:09:41.165 "raid_level": "raid0", 00:09:41.165 "superblock": true, 00:09:41.165 "num_base_bdevs": 4, 
00:09:41.165 "num_base_bdevs_discovered": 4, 00:09:41.165 "num_base_bdevs_operational": 4, 00:09:41.165 "base_bdevs_list": [ 00:09:41.165 { 00:09:41.165 "name": "BaseBdev1", 00:09:41.165 "uuid": "5547c388-5107-4e6a-a1f8-53e74d17be28", 00:09:41.165 "is_configured": true, 00:09:41.165 "data_offset": 2048, 00:09:41.165 "data_size": 63488 00:09:41.165 }, 00:09:41.165 { 00:09:41.165 "name": "BaseBdev2", 00:09:41.165 "uuid": "3cfe770f-d1f4-4b03-b49a-cacad4a913a1", 00:09:41.165 "is_configured": true, 00:09:41.165 "data_offset": 2048, 00:09:41.165 "data_size": 63488 00:09:41.165 }, 00:09:41.165 { 00:09:41.165 "name": "BaseBdev3", 00:09:41.165 "uuid": "0d9fbed6-9ec7-4380-ad03-e7769acbd1a6", 00:09:41.165 "is_configured": true, 00:09:41.165 "data_offset": 2048, 00:09:41.165 "data_size": 63488 00:09:41.165 }, 00:09:41.165 { 00:09:41.165 "name": "BaseBdev4", 00:09:41.165 "uuid": "a11d18b0-5f07-4bc1-bb9c-409af9541bcb", 00:09:41.165 "is_configured": true, 00:09:41.165 "data_offset": 2048, 00:09:41.165 "data_size": 63488 00:09:41.165 } 00:09:41.165 ] 00:09:41.165 }' 00:09:41.165 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.165 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.425 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:41.425 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:41.425 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:41.425 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:41.425 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:41.425 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:41.425 
12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:41.425 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:41.425 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.425 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.425 [2024-11-26 12:52:58.973405] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.425 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.425 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:41.425 "name": "Existed_Raid", 00:09:41.425 "aliases": [ 00:09:41.425 "5213a159-7d15-41ea-8b8e-82a4d1a9fff9" 00:09:41.425 ], 00:09:41.425 "product_name": "Raid Volume", 00:09:41.425 "block_size": 512, 00:09:41.425 "num_blocks": 253952, 00:09:41.425 "uuid": "5213a159-7d15-41ea-8b8e-82a4d1a9fff9", 00:09:41.425 "assigned_rate_limits": { 00:09:41.425 "rw_ios_per_sec": 0, 00:09:41.425 "rw_mbytes_per_sec": 0, 00:09:41.425 "r_mbytes_per_sec": 0, 00:09:41.425 "w_mbytes_per_sec": 0 00:09:41.425 }, 00:09:41.425 "claimed": false, 00:09:41.425 "zoned": false, 00:09:41.425 "supported_io_types": { 00:09:41.425 "read": true, 00:09:41.425 "write": true, 00:09:41.425 "unmap": true, 00:09:41.425 "flush": true, 00:09:41.425 "reset": true, 00:09:41.425 "nvme_admin": false, 00:09:41.425 "nvme_io": false, 00:09:41.425 "nvme_io_md": false, 00:09:41.425 "write_zeroes": true, 00:09:41.425 "zcopy": false, 00:09:41.425 "get_zone_info": false, 00:09:41.425 "zone_management": false, 00:09:41.425 "zone_append": false, 00:09:41.425 "compare": false, 00:09:41.425 "compare_and_write": false, 00:09:41.425 "abort": false, 00:09:41.425 "seek_hole": false, 00:09:41.425 "seek_data": false, 00:09:41.425 "copy": false, 00:09:41.425 
"nvme_iov_md": false 00:09:41.425 }, 00:09:41.425 "memory_domains": [ 00:09:41.425 { 00:09:41.425 "dma_device_id": "system", 00:09:41.425 "dma_device_type": 1 00:09:41.425 }, 00:09:41.425 { 00:09:41.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.425 "dma_device_type": 2 00:09:41.425 }, 00:09:41.425 { 00:09:41.425 "dma_device_id": "system", 00:09:41.425 "dma_device_type": 1 00:09:41.425 }, 00:09:41.425 { 00:09:41.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.425 "dma_device_type": 2 00:09:41.425 }, 00:09:41.425 { 00:09:41.425 "dma_device_id": "system", 00:09:41.425 "dma_device_type": 1 00:09:41.425 }, 00:09:41.425 { 00:09:41.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.425 "dma_device_type": 2 00:09:41.425 }, 00:09:41.425 { 00:09:41.425 "dma_device_id": "system", 00:09:41.425 "dma_device_type": 1 00:09:41.425 }, 00:09:41.425 { 00:09:41.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.425 "dma_device_type": 2 00:09:41.425 } 00:09:41.425 ], 00:09:41.425 "driver_specific": { 00:09:41.425 "raid": { 00:09:41.425 "uuid": "5213a159-7d15-41ea-8b8e-82a4d1a9fff9", 00:09:41.425 "strip_size_kb": 64, 00:09:41.425 "state": "online", 00:09:41.425 "raid_level": "raid0", 00:09:41.425 "superblock": true, 00:09:41.425 "num_base_bdevs": 4, 00:09:41.425 "num_base_bdevs_discovered": 4, 00:09:41.425 "num_base_bdevs_operational": 4, 00:09:41.425 "base_bdevs_list": [ 00:09:41.425 { 00:09:41.425 "name": "BaseBdev1", 00:09:41.425 "uuid": "5547c388-5107-4e6a-a1f8-53e74d17be28", 00:09:41.425 "is_configured": true, 00:09:41.425 "data_offset": 2048, 00:09:41.425 "data_size": 63488 00:09:41.425 }, 00:09:41.426 { 00:09:41.426 "name": "BaseBdev2", 00:09:41.426 "uuid": "3cfe770f-d1f4-4b03-b49a-cacad4a913a1", 00:09:41.426 "is_configured": true, 00:09:41.426 "data_offset": 2048, 00:09:41.426 "data_size": 63488 00:09:41.426 }, 00:09:41.426 { 00:09:41.426 "name": "BaseBdev3", 00:09:41.426 "uuid": "0d9fbed6-9ec7-4380-ad03-e7769acbd1a6", 00:09:41.426 "is_configured": true, 
00:09:41.426 "data_offset": 2048, 00:09:41.426 "data_size": 63488 00:09:41.426 }, 00:09:41.426 { 00:09:41.426 "name": "BaseBdev4", 00:09:41.426 "uuid": "a11d18b0-5f07-4bc1-bb9c-409af9541bcb", 00:09:41.426 "is_configured": true, 00:09:41.426 "data_offset": 2048, 00:09:41.426 "data_size": 63488 00:09:41.426 } 00:09:41.426 ] 00:09:41.426 } 00:09:41.426 } 00:09:41.426 }' 00:09:41.426 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:41.426 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:41.426 BaseBdev2 00:09:41.426 BaseBdev3 00:09:41.426 BaseBdev4' 00:09:41.426 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.426 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:41.426 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.426 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.426 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:41.426 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.426 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.686 12:52:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.686 [2024-11-26 12:52:59.284562] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:41.686 [2024-11-26 12:52:59.284592] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:41.686 [2024-11-26 12:52:59.284637] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:41.686 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.686 "name": "Existed_Raid", 00:09:41.686 "uuid": "5213a159-7d15-41ea-8b8e-82a4d1a9fff9", 00:09:41.686 "strip_size_kb": 64, 00:09:41.686 "state": "offline", 00:09:41.686 "raid_level": "raid0", 00:09:41.686 "superblock": true, 00:09:41.686 "num_base_bdevs": 4, 00:09:41.686 "num_base_bdevs_discovered": 3, 00:09:41.686 "num_base_bdevs_operational": 3, 00:09:41.686 "base_bdevs_list": [ 00:09:41.686 { 00:09:41.686 "name": null, 00:09:41.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.686 "is_configured": false, 00:09:41.686 "data_offset": 0, 00:09:41.686 "data_size": 63488 00:09:41.686 }, 00:09:41.686 { 00:09:41.686 "name": "BaseBdev2", 00:09:41.686 "uuid": "3cfe770f-d1f4-4b03-b49a-cacad4a913a1", 00:09:41.686 "is_configured": true, 00:09:41.686 "data_offset": 2048, 00:09:41.686 "data_size": 63488 00:09:41.686 }, 00:09:41.686 { 00:09:41.686 "name": "BaseBdev3", 00:09:41.687 "uuid": "0d9fbed6-9ec7-4380-ad03-e7769acbd1a6", 00:09:41.687 "is_configured": true, 00:09:41.687 "data_offset": 2048, 00:09:41.687 "data_size": 63488 00:09:41.687 }, 00:09:41.687 { 00:09:41.687 "name": "BaseBdev4", 00:09:41.687 "uuid": "a11d18b0-5f07-4bc1-bb9c-409af9541bcb", 00:09:41.687 "is_configured": true, 00:09:41.687 "data_offset": 2048, 00:09:41.687 "data_size": 63488 00:09:41.687 } 00:09:41.687 ] 00:09:41.687 }' 00:09:41.687 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.687 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.256 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:42.256 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:42.256 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.256 
12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:42.256 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.256 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.256 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.256 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:42.256 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:42.256 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:42.256 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.256 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.256 [2024-11-26 12:52:59.727137] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:42.256 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.256 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:42.256 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.257 [2024-11-26 12:52:59.794090] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:42.257 12:52:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.257 [2024-11-26 12:52:59.861095] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:42.257 [2024-11-26 12:52:59.861138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.257 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.525 BaseBdev2 00:09:42.525 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.525 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:42.525 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:42.525 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:42.525 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:42.525 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:42.525 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:42.525 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:42.525 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.525 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.525 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.525 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:42.525 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.525 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.525 [ 00:09:42.525 { 00:09:42.525 "name": "BaseBdev2", 00:09:42.525 "aliases": [ 00:09:42.525 
"36fec8e9-bf47-4d8c-9a0c-482dd97b8944" 00:09:42.525 ], 00:09:42.525 "product_name": "Malloc disk", 00:09:42.525 "block_size": 512, 00:09:42.525 "num_blocks": 65536, 00:09:42.525 "uuid": "36fec8e9-bf47-4d8c-9a0c-482dd97b8944", 00:09:42.525 "assigned_rate_limits": { 00:09:42.525 "rw_ios_per_sec": 0, 00:09:42.525 "rw_mbytes_per_sec": 0, 00:09:42.525 "r_mbytes_per_sec": 0, 00:09:42.525 "w_mbytes_per_sec": 0 00:09:42.525 }, 00:09:42.525 "claimed": false, 00:09:42.525 "zoned": false, 00:09:42.525 "supported_io_types": { 00:09:42.525 "read": true, 00:09:42.525 "write": true, 00:09:42.525 "unmap": true, 00:09:42.525 "flush": true, 00:09:42.525 "reset": true, 00:09:42.525 "nvme_admin": false, 00:09:42.525 "nvme_io": false, 00:09:42.525 "nvme_io_md": false, 00:09:42.525 "write_zeroes": true, 00:09:42.525 "zcopy": true, 00:09:42.525 "get_zone_info": false, 00:09:42.525 "zone_management": false, 00:09:42.525 "zone_append": false, 00:09:42.525 "compare": false, 00:09:42.525 "compare_and_write": false, 00:09:42.525 "abort": true, 00:09:42.525 "seek_hole": false, 00:09:42.525 "seek_data": false, 00:09:42.525 "copy": true, 00:09:42.525 "nvme_iov_md": false 00:09:42.525 }, 00:09:42.525 "memory_domains": [ 00:09:42.525 { 00:09:42.525 "dma_device_id": "system", 00:09:42.525 "dma_device_type": 1 00:09:42.525 }, 00:09:42.525 { 00:09:42.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.525 "dma_device_type": 2 00:09:42.525 } 00:09:42.525 ], 00:09:42.525 "driver_specific": {} 00:09:42.525 } 00:09:42.525 ] 00:09:42.525 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.525 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:42.525 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:42.525 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:42.525 12:52:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:42.525 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.525 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.525 BaseBdev3 00:09:42.525 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.525 12:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:42.525 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:42.525 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:42.525 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:42.525 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:42.525 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:42.525 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:42.525 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.525 12:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.525 [ 00:09:42.525 { 
00:09:42.525 "name": "BaseBdev3", 00:09:42.525 "aliases": [ 00:09:42.525 "91a912e7-4120-4ef9-9863-4e376ec06238" 00:09:42.525 ], 00:09:42.525 "product_name": "Malloc disk", 00:09:42.525 "block_size": 512, 00:09:42.525 "num_blocks": 65536, 00:09:42.525 "uuid": "91a912e7-4120-4ef9-9863-4e376ec06238", 00:09:42.525 "assigned_rate_limits": { 00:09:42.525 "rw_ios_per_sec": 0, 00:09:42.525 "rw_mbytes_per_sec": 0, 00:09:42.525 "r_mbytes_per_sec": 0, 00:09:42.525 "w_mbytes_per_sec": 0 00:09:42.525 }, 00:09:42.525 "claimed": false, 00:09:42.525 "zoned": false, 00:09:42.525 "supported_io_types": { 00:09:42.525 "read": true, 00:09:42.525 "write": true, 00:09:42.525 "unmap": true, 00:09:42.525 "flush": true, 00:09:42.525 "reset": true, 00:09:42.525 "nvme_admin": false, 00:09:42.525 "nvme_io": false, 00:09:42.525 "nvme_io_md": false, 00:09:42.525 "write_zeroes": true, 00:09:42.525 "zcopy": true, 00:09:42.525 "get_zone_info": false, 00:09:42.525 "zone_management": false, 00:09:42.525 "zone_append": false, 00:09:42.525 "compare": false, 00:09:42.525 "compare_and_write": false, 00:09:42.525 "abort": true, 00:09:42.525 "seek_hole": false, 00:09:42.525 "seek_data": false, 00:09:42.525 "copy": true, 00:09:42.525 "nvme_iov_md": false 00:09:42.525 }, 00:09:42.525 "memory_domains": [ 00:09:42.525 { 00:09:42.525 "dma_device_id": "system", 00:09:42.525 "dma_device_type": 1 00:09:42.525 }, 00:09:42.525 { 00:09:42.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.525 "dma_device_type": 2 00:09:42.525 } 00:09:42.525 ], 00:09:42.525 "driver_specific": {} 00:09:42.525 } 00:09:42.525 ] 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.525 BaseBdev4 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:09:42.525 [ 00:09:42.525 { 00:09:42.525 "name": "BaseBdev4", 00:09:42.525 "aliases": [ 00:09:42.525 "05346068-3643-4fc3-ad77-28aee00d42b9" 00:09:42.525 ], 00:09:42.525 "product_name": "Malloc disk", 00:09:42.525 "block_size": 512, 00:09:42.525 "num_blocks": 65536, 00:09:42.525 "uuid": "05346068-3643-4fc3-ad77-28aee00d42b9", 00:09:42.525 "assigned_rate_limits": { 00:09:42.525 "rw_ios_per_sec": 0, 00:09:42.525 "rw_mbytes_per_sec": 0, 00:09:42.525 "r_mbytes_per_sec": 0, 00:09:42.525 "w_mbytes_per_sec": 0 00:09:42.525 }, 00:09:42.525 "claimed": false, 00:09:42.525 "zoned": false, 00:09:42.525 "supported_io_types": { 00:09:42.525 "read": true, 00:09:42.525 "write": true, 00:09:42.525 "unmap": true, 00:09:42.525 "flush": true, 00:09:42.525 "reset": true, 00:09:42.525 "nvme_admin": false, 00:09:42.525 "nvme_io": false, 00:09:42.525 "nvme_io_md": false, 00:09:42.525 "write_zeroes": true, 00:09:42.525 "zcopy": true, 00:09:42.525 "get_zone_info": false, 00:09:42.525 "zone_management": false, 00:09:42.525 "zone_append": false, 00:09:42.525 "compare": false, 00:09:42.525 "compare_and_write": false, 00:09:42.525 "abort": true, 00:09:42.525 "seek_hole": false, 00:09:42.525 "seek_data": false, 00:09:42.525 "copy": true, 00:09:42.525 "nvme_iov_md": false 00:09:42.525 }, 00:09:42.525 "memory_domains": [ 00:09:42.525 { 00:09:42.525 "dma_device_id": "system", 00:09:42.525 "dma_device_type": 1 00:09:42.525 }, 00:09:42.525 { 00:09:42.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.525 "dma_device_type": 2 00:09:42.525 } 00:09:42.525 ], 00:09:42.525 "driver_specific": {} 00:09:42.525 } 00:09:42.525 ] 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:42.525 12:53:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.525 [2024-11-26 12:53:00.088945] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:42.525 [2024-11-26 12:53:00.088990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:42.525 [2024-11-26 12:53:00.089028] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:42.525 [2024-11-26 12:53:00.090949] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:42.525 [2024-11-26 12:53:00.091002] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:42.525 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.526 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:42.526 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.526 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.526 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.526 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.526 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.526 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.526 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.526 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.526 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.526 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.526 "name": "Existed_Raid", 00:09:42.526 "uuid": "92b20ddd-43f3-4fa0-9d60-a32bea60ed85", 00:09:42.526 "strip_size_kb": 64, 00:09:42.526 "state": "configuring", 00:09:42.526 "raid_level": "raid0", 00:09:42.526 "superblock": true, 00:09:42.526 "num_base_bdevs": 4, 00:09:42.526 "num_base_bdevs_discovered": 3, 00:09:42.526 "num_base_bdevs_operational": 4, 00:09:42.526 "base_bdevs_list": [ 00:09:42.526 { 00:09:42.526 "name": "BaseBdev1", 00:09:42.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.526 "is_configured": false, 00:09:42.526 "data_offset": 0, 00:09:42.526 "data_size": 0 00:09:42.526 }, 00:09:42.526 { 00:09:42.526 "name": "BaseBdev2", 00:09:42.526 "uuid": "36fec8e9-bf47-4d8c-9a0c-482dd97b8944", 00:09:42.526 "is_configured": true, 00:09:42.526 "data_offset": 2048, 00:09:42.526 "data_size": 63488 
00:09:42.526 }, 00:09:42.526 { 00:09:42.526 "name": "BaseBdev3", 00:09:42.526 "uuid": "91a912e7-4120-4ef9-9863-4e376ec06238", 00:09:42.526 "is_configured": true, 00:09:42.526 "data_offset": 2048, 00:09:42.526 "data_size": 63488 00:09:42.526 }, 00:09:42.526 { 00:09:42.526 "name": "BaseBdev4", 00:09:42.526 "uuid": "05346068-3643-4fc3-ad77-28aee00d42b9", 00:09:42.526 "is_configured": true, 00:09:42.526 "data_offset": 2048, 00:09:42.526 "data_size": 63488 00:09:42.526 } 00:09:42.526 ] 00:09:42.526 }' 00:09:42.526 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.526 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.112 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:43.112 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.112 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.112 [2024-11-26 12:53:00.524169] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:43.112 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.112 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:43.112 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.112 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.112 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:43.112 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.112 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:43.112 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.112 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.112 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.112 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.112 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.112 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.112 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.112 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.112 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.112 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.112 "name": "Existed_Raid", 00:09:43.112 "uuid": "92b20ddd-43f3-4fa0-9d60-a32bea60ed85", 00:09:43.112 "strip_size_kb": 64, 00:09:43.112 "state": "configuring", 00:09:43.112 "raid_level": "raid0", 00:09:43.112 "superblock": true, 00:09:43.112 "num_base_bdevs": 4, 00:09:43.112 "num_base_bdevs_discovered": 2, 00:09:43.112 "num_base_bdevs_operational": 4, 00:09:43.112 "base_bdevs_list": [ 00:09:43.112 { 00:09:43.112 "name": "BaseBdev1", 00:09:43.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.112 "is_configured": false, 00:09:43.112 "data_offset": 0, 00:09:43.112 "data_size": 0 00:09:43.112 }, 00:09:43.112 { 00:09:43.112 "name": null, 00:09:43.112 "uuid": "36fec8e9-bf47-4d8c-9a0c-482dd97b8944", 00:09:43.112 "is_configured": false, 00:09:43.112 "data_offset": 0, 00:09:43.112 "data_size": 63488 
00:09:43.112 }, 00:09:43.112 { 00:09:43.112 "name": "BaseBdev3", 00:09:43.112 "uuid": "91a912e7-4120-4ef9-9863-4e376ec06238", 00:09:43.112 "is_configured": true, 00:09:43.112 "data_offset": 2048, 00:09:43.112 "data_size": 63488 00:09:43.112 }, 00:09:43.112 { 00:09:43.112 "name": "BaseBdev4", 00:09:43.112 "uuid": "05346068-3643-4fc3-ad77-28aee00d42b9", 00:09:43.112 "is_configured": true, 00:09:43.112 "data_offset": 2048, 00:09:43.112 "data_size": 63488 00:09:43.112 } 00:09:43.112 ] 00:09:43.112 }' 00:09:43.112 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.112 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.373 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.373 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.373 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.373 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:43.373 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.373 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:43.373 12:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:43.373 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.373 12:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.373 [2024-11-26 12:53:01.002313] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:43.373 BaseBdev1 00:09:43.373 12:53:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.373 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:43.373 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:43.373 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:43.373 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:43.373 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:43.373 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:43.373 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:43.373 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.373 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.373 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.373 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:43.373 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.373 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.373 [ 00:09:43.373 { 00:09:43.373 "name": "BaseBdev1", 00:09:43.373 "aliases": [ 00:09:43.373 "0b8e244b-3f54-4d6b-8ad8-9882939011b2" 00:09:43.373 ], 00:09:43.373 "product_name": "Malloc disk", 00:09:43.373 "block_size": 512, 00:09:43.373 "num_blocks": 65536, 00:09:43.373 "uuid": "0b8e244b-3f54-4d6b-8ad8-9882939011b2", 00:09:43.373 "assigned_rate_limits": { 00:09:43.373 "rw_ios_per_sec": 0, 00:09:43.373 "rw_mbytes_per_sec": 0, 
00:09:43.373 "r_mbytes_per_sec": 0, 00:09:43.373 "w_mbytes_per_sec": 0 00:09:43.373 }, 00:09:43.373 "claimed": true, 00:09:43.373 "claim_type": "exclusive_write", 00:09:43.373 "zoned": false, 00:09:43.373 "supported_io_types": { 00:09:43.373 "read": true, 00:09:43.373 "write": true, 00:09:43.373 "unmap": true, 00:09:43.373 "flush": true, 00:09:43.373 "reset": true, 00:09:43.373 "nvme_admin": false, 00:09:43.373 "nvme_io": false, 00:09:43.373 "nvme_io_md": false, 00:09:43.373 "write_zeroes": true, 00:09:43.373 "zcopy": true, 00:09:43.373 "get_zone_info": false, 00:09:43.373 "zone_management": false, 00:09:43.373 "zone_append": false, 00:09:43.373 "compare": false, 00:09:43.373 "compare_and_write": false, 00:09:43.373 "abort": true, 00:09:43.373 "seek_hole": false, 00:09:43.373 "seek_data": false, 00:09:43.373 "copy": true, 00:09:43.373 "nvme_iov_md": false 00:09:43.373 }, 00:09:43.373 "memory_domains": [ 00:09:43.373 { 00:09:43.373 "dma_device_id": "system", 00:09:43.373 "dma_device_type": 1 00:09:43.373 }, 00:09:43.373 { 00:09:43.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.373 "dma_device_type": 2 00:09:43.373 } 00:09:43.373 ], 00:09:43.374 "driver_specific": {} 00:09:43.374 } 00:09:43.374 ] 00:09:43.374 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.374 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:43.374 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:43.374 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.374 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.374 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:43.374 12:53:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.374 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:43.374 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.374 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.374 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.374 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.374 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.374 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.374 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.634 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.634 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.634 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.634 "name": "Existed_Raid", 00:09:43.634 "uuid": "92b20ddd-43f3-4fa0-9d60-a32bea60ed85", 00:09:43.634 "strip_size_kb": 64, 00:09:43.634 "state": "configuring", 00:09:43.634 "raid_level": "raid0", 00:09:43.634 "superblock": true, 00:09:43.634 "num_base_bdevs": 4, 00:09:43.634 "num_base_bdevs_discovered": 3, 00:09:43.634 "num_base_bdevs_operational": 4, 00:09:43.634 "base_bdevs_list": [ 00:09:43.634 { 00:09:43.634 "name": "BaseBdev1", 00:09:43.634 "uuid": "0b8e244b-3f54-4d6b-8ad8-9882939011b2", 00:09:43.634 "is_configured": true, 00:09:43.634 "data_offset": 2048, 00:09:43.634 "data_size": 63488 00:09:43.634 }, 00:09:43.634 { 
00:09:43.634 "name": null, 00:09:43.634 "uuid": "36fec8e9-bf47-4d8c-9a0c-482dd97b8944", 00:09:43.634 "is_configured": false, 00:09:43.634 "data_offset": 0, 00:09:43.634 "data_size": 63488 00:09:43.634 }, 00:09:43.634 { 00:09:43.634 "name": "BaseBdev3", 00:09:43.634 "uuid": "91a912e7-4120-4ef9-9863-4e376ec06238", 00:09:43.634 "is_configured": true, 00:09:43.634 "data_offset": 2048, 00:09:43.634 "data_size": 63488 00:09:43.634 }, 00:09:43.634 { 00:09:43.634 "name": "BaseBdev4", 00:09:43.634 "uuid": "05346068-3643-4fc3-ad77-28aee00d42b9", 00:09:43.634 "is_configured": true, 00:09:43.634 "data_offset": 2048, 00:09:43.634 "data_size": 63488 00:09:43.634 } 00:09:43.634 ] 00:09:43.634 }' 00:09:43.634 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.634 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.894 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:43.894 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.894 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.894 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.894 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.894 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:43.894 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:43.894 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.894 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.894 [2024-11-26 12:53:01.525427] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:43.894 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.894 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:43.894 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.894 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.894 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:43.894 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.894 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:43.894 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.894 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.894 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.894 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.894 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.894 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.894 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.894 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.894 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.894 12:53:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.894 "name": "Existed_Raid", 00:09:43.894 "uuid": "92b20ddd-43f3-4fa0-9d60-a32bea60ed85", 00:09:43.894 "strip_size_kb": 64, 00:09:43.894 "state": "configuring", 00:09:43.894 "raid_level": "raid0", 00:09:43.894 "superblock": true, 00:09:43.894 "num_base_bdevs": 4, 00:09:43.894 "num_base_bdevs_discovered": 2, 00:09:43.894 "num_base_bdevs_operational": 4, 00:09:43.894 "base_bdevs_list": [ 00:09:43.894 { 00:09:43.894 "name": "BaseBdev1", 00:09:43.894 "uuid": "0b8e244b-3f54-4d6b-8ad8-9882939011b2", 00:09:43.894 "is_configured": true, 00:09:43.894 "data_offset": 2048, 00:09:43.894 "data_size": 63488 00:09:43.894 }, 00:09:43.894 { 00:09:43.894 "name": null, 00:09:43.894 "uuid": "36fec8e9-bf47-4d8c-9a0c-482dd97b8944", 00:09:43.894 "is_configured": false, 00:09:43.894 "data_offset": 0, 00:09:43.894 "data_size": 63488 00:09:43.894 }, 00:09:43.894 { 00:09:43.894 "name": null, 00:09:43.894 "uuid": "91a912e7-4120-4ef9-9863-4e376ec06238", 00:09:43.894 "is_configured": false, 00:09:43.894 "data_offset": 0, 00:09:43.894 "data_size": 63488 00:09:43.894 }, 00:09:43.894 { 00:09:43.894 "name": "BaseBdev4", 00:09:43.894 "uuid": "05346068-3643-4fc3-ad77-28aee00d42b9", 00:09:43.894 "is_configured": true, 00:09:43.894 "data_offset": 2048, 00:09:43.894 "data_size": 63488 00:09:43.894 } 00:09:43.894 ] 00:09:43.894 }' 00:09:43.894 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.894 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.465 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.465 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:44.465 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.465 
12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.465 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.465 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:44.465 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:44.465 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.465 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.465 [2024-11-26 12:53:01.964718] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:44.465 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.465 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:44.465 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.465 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.465 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:44.465 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.465 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:44.465 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.465 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.465 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:44.465 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.465 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.465 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.465 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.465 12:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.465 12:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.465 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.465 "name": "Existed_Raid", 00:09:44.465 "uuid": "92b20ddd-43f3-4fa0-9d60-a32bea60ed85", 00:09:44.465 "strip_size_kb": 64, 00:09:44.465 "state": "configuring", 00:09:44.465 "raid_level": "raid0", 00:09:44.465 "superblock": true, 00:09:44.465 "num_base_bdevs": 4, 00:09:44.465 "num_base_bdevs_discovered": 3, 00:09:44.465 "num_base_bdevs_operational": 4, 00:09:44.465 "base_bdevs_list": [ 00:09:44.465 { 00:09:44.465 "name": "BaseBdev1", 00:09:44.465 "uuid": "0b8e244b-3f54-4d6b-8ad8-9882939011b2", 00:09:44.465 "is_configured": true, 00:09:44.465 "data_offset": 2048, 00:09:44.465 "data_size": 63488 00:09:44.465 }, 00:09:44.465 { 00:09:44.465 "name": null, 00:09:44.465 "uuid": "36fec8e9-bf47-4d8c-9a0c-482dd97b8944", 00:09:44.465 "is_configured": false, 00:09:44.465 "data_offset": 0, 00:09:44.465 "data_size": 63488 00:09:44.465 }, 00:09:44.465 { 00:09:44.465 "name": "BaseBdev3", 00:09:44.465 "uuid": "91a912e7-4120-4ef9-9863-4e376ec06238", 00:09:44.465 "is_configured": true, 00:09:44.465 "data_offset": 2048, 00:09:44.465 "data_size": 63488 00:09:44.465 }, 00:09:44.465 { 00:09:44.465 "name": "BaseBdev4", 00:09:44.465 "uuid": 
"05346068-3643-4fc3-ad77-28aee00d42b9", 00:09:44.465 "is_configured": true, 00:09:44.465 "data_offset": 2048, 00:09:44.465 "data_size": 63488 00:09:44.465 } 00:09:44.465 ] 00:09:44.465 }' 00:09:44.465 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.465 12:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.725 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.725 12:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.725 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:44.725 12:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.725 12:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.985 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:44.985 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:44.985 12:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.985 12:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.985 [2024-11-26 12:53:02.407966] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:44.985 12:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.986 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:44.986 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.986 12:53:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.986 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:44.986 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.986 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:44.986 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.986 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.986 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.986 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.986 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.986 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.986 12:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.986 12:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.986 12:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.986 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.986 "name": "Existed_Raid", 00:09:44.986 "uuid": "92b20ddd-43f3-4fa0-9d60-a32bea60ed85", 00:09:44.986 "strip_size_kb": 64, 00:09:44.986 "state": "configuring", 00:09:44.986 "raid_level": "raid0", 00:09:44.986 "superblock": true, 00:09:44.986 "num_base_bdevs": 4, 00:09:44.986 "num_base_bdevs_discovered": 2, 00:09:44.986 "num_base_bdevs_operational": 4, 00:09:44.986 "base_bdevs_list": [ 00:09:44.986 { 00:09:44.986 "name": null, 00:09:44.986 
"uuid": "0b8e244b-3f54-4d6b-8ad8-9882939011b2", 00:09:44.986 "is_configured": false, 00:09:44.986 "data_offset": 0, 00:09:44.986 "data_size": 63488 00:09:44.986 }, 00:09:44.986 { 00:09:44.986 "name": null, 00:09:44.986 "uuid": "36fec8e9-bf47-4d8c-9a0c-482dd97b8944", 00:09:44.986 "is_configured": false, 00:09:44.986 "data_offset": 0, 00:09:44.986 "data_size": 63488 00:09:44.986 }, 00:09:44.986 { 00:09:44.986 "name": "BaseBdev3", 00:09:44.986 "uuid": "91a912e7-4120-4ef9-9863-4e376ec06238", 00:09:44.986 "is_configured": true, 00:09:44.986 "data_offset": 2048, 00:09:44.986 "data_size": 63488 00:09:44.986 }, 00:09:44.986 { 00:09:44.986 "name": "BaseBdev4", 00:09:44.986 "uuid": "05346068-3643-4fc3-ad77-28aee00d42b9", 00:09:44.986 "is_configured": true, 00:09:44.986 "data_offset": 2048, 00:09:44.986 "data_size": 63488 00:09:44.986 } 00:09:44.986 ] 00:09:44.986 }' 00:09:44.986 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.986 12:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.246 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.246 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:45.246 12:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.246 12:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.246 12:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.246 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:45.246 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:45.246 12:53:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.246 12:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.246 [2024-11-26 12:53:02.877622] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:45.246 12:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.246 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:45.246 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.246 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.246 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:45.246 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.246 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:45.246 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.246 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.246 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.246 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.246 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.246 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.246 12:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.246 12:53:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.246 12:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.506 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.506 "name": "Existed_Raid", 00:09:45.506 "uuid": "92b20ddd-43f3-4fa0-9d60-a32bea60ed85", 00:09:45.506 "strip_size_kb": 64, 00:09:45.506 "state": "configuring", 00:09:45.506 "raid_level": "raid0", 00:09:45.506 "superblock": true, 00:09:45.506 "num_base_bdevs": 4, 00:09:45.506 "num_base_bdevs_discovered": 3, 00:09:45.506 "num_base_bdevs_operational": 4, 00:09:45.506 "base_bdevs_list": [ 00:09:45.506 { 00:09:45.506 "name": null, 00:09:45.506 "uuid": "0b8e244b-3f54-4d6b-8ad8-9882939011b2", 00:09:45.506 "is_configured": false, 00:09:45.506 "data_offset": 0, 00:09:45.506 "data_size": 63488 00:09:45.506 }, 00:09:45.506 { 00:09:45.506 "name": "BaseBdev2", 00:09:45.506 "uuid": "36fec8e9-bf47-4d8c-9a0c-482dd97b8944", 00:09:45.506 "is_configured": true, 00:09:45.506 "data_offset": 2048, 00:09:45.506 "data_size": 63488 00:09:45.506 }, 00:09:45.506 { 00:09:45.506 "name": "BaseBdev3", 00:09:45.506 "uuid": "91a912e7-4120-4ef9-9863-4e376ec06238", 00:09:45.506 "is_configured": true, 00:09:45.506 "data_offset": 2048, 00:09:45.506 "data_size": 63488 00:09:45.506 }, 00:09:45.506 { 00:09:45.506 "name": "BaseBdev4", 00:09:45.506 "uuid": "05346068-3643-4fc3-ad77-28aee00d42b9", 00:09:45.506 "is_configured": true, 00:09:45.506 "data_offset": 2048, 00:09:45.506 "data_size": 63488 00:09:45.506 } 00:09:45.506 ] 00:09:45.506 }' 00:09:45.506 12:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.506 12:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.765 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.765 12:53:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.765 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.765 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:45.765 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.765 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:45.765 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.765 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:45.765 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.765 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.765 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.765 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0b8e244b-3f54-4d6b-8ad8-9882939011b2 00:09:45.765 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.765 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.765 NewBaseBdev 00:09:45.765 [2024-11-26 12:53:03.339701] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:45.765 [2024-11-26 12:53:03.339873] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:45.765 [2024-11-26 12:53:03.339886] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:45.765 [2024-11-26 12:53:03.340124] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:45.765 [2024-11-26 12:53:03.340246] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:45.765 [2024-11-26 12:53:03.340259] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:45.765 [2024-11-26 12:53:03.340349] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.765 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.765 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:45.766 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:45.766 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:45.766 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:45.766 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:45.766 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:45.766 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:45.766 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.766 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.766 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.766 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:45.766 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.766 
12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.766 [ 00:09:45.766 { 00:09:45.766 "name": "NewBaseBdev", 00:09:45.766 "aliases": [ 00:09:45.766 "0b8e244b-3f54-4d6b-8ad8-9882939011b2" 00:09:45.766 ], 00:09:45.766 "product_name": "Malloc disk", 00:09:45.766 "block_size": 512, 00:09:45.766 "num_blocks": 65536, 00:09:45.766 "uuid": "0b8e244b-3f54-4d6b-8ad8-9882939011b2", 00:09:45.766 "assigned_rate_limits": { 00:09:45.766 "rw_ios_per_sec": 0, 00:09:45.766 "rw_mbytes_per_sec": 0, 00:09:45.766 "r_mbytes_per_sec": 0, 00:09:45.766 "w_mbytes_per_sec": 0 00:09:45.766 }, 00:09:45.766 "claimed": true, 00:09:45.766 "claim_type": "exclusive_write", 00:09:45.766 "zoned": false, 00:09:45.766 "supported_io_types": { 00:09:45.766 "read": true, 00:09:45.766 "write": true, 00:09:45.766 "unmap": true, 00:09:45.766 "flush": true, 00:09:45.766 "reset": true, 00:09:45.766 "nvme_admin": false, 00:09:45.766 "nvme_io": false, 00:09:45.766 "nvme_io_md": false, 00:09:45.766 "write_zeroes": true, 00:09:45.766 "zcopy": true, 00:09:45.766 "get_zone_info": false, 00:09:45.766 "zone_management": false, 00:09:45.766 "zone_append": false, 00:09:45.766 "compare": false, 00:09:45.766 "compare_and_write": false, 00:09:45.766 "abort": true, 00:09:45.766 "seek_hole": false, 00:09:45.766 "seek_data": false, 00:09:45.766 "copy": true, 00:09:45.766 "nvme_iov_md": false 00:09:45.766 }, 00:09:45.766 "memory_domains": [ 00:09:45.766 { 00:09:45.766 "dma_device_id": "system", 00:09:45.766 "dma_device_type": 1 00:09:45.766 }, 00:09:45.766 { 00:09:45.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.766 "dma_device_type": 2 00:09:45.766 } 00:09:45.766 ], 00:09:45.766 "driver_specific": {} 00:09:45.766 } 00:09:45.766 ] 00:09:45.766 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.766 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:45.766 12:53:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:45.766 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.766 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.766 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:45.766 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.766 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:45.766 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.766 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.766 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.766 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.766 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.766 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.766 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.766 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.766 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.766 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.766 "name": "Existed_Raid", 00:09:45.766 "uuid": "92b20ddd-43f3-4fa0-9d60-a32bea60ed85", 00:09:45.766 "strip_size_kb": 64, 00:09:45.766 
"state": "online", 00:09:45.766 "raid_level": "raid0", 00:09:45.766 "superblock": true, 00:09:45.766 "num_base_bdevs": 4, 00:09:45.766 "num_base_bdevs_discovered": 4, 00:09:45.766 "num_base_bdevs_operational": 4, 00:09:45.766 "base_bdevs_list": [ 00:09:45.766 { 00:09:45.766 "name": "NewBaseBdev", 00:09:45.766 "uuid": "0b8e244b-3f54-4d6b-8ad8-9882939011b2", 00:09:45.766 "is_configured": true, 00:09:45.766 "data_offset": 2048, 00:09:45.766 "data_size": 63488 00:09:45.766 }, 00:09:45.766 { 00:09:45.766 "name": "BaseBdev2", 00:09:45.766 "uuid": "36fec8e9-bf47-4d8c-9a0c-482dd97b8944", 00:09:45.766 "is_configured": true, 00:09:45.766 "data_offset": 2048, 00:09:45.766 "data_size": 63488 00:09:45.766 }, 00:09:45.766 { 00:09:45.766 "name": "BaseBdev3", 00:09:45.766 "uuid": "91a912e7-4120-4ef9-9863-4e376ec06238", 00:09:45.766 "is_configured": true, 00:09:45.766 "data_offset": 2048, 00:09:45.766 "data_size": 63488 00:09:45.766 }, 00:09:45.766 { 00:09:45.766 "name": "BaseBdev4", 00:09:45.766 "uuid": "05346068-3643-4fc3-ad77-28aee00d42b9", 00:09:45.766 "is_configured": true, 00:09:45.766 "data_offset": 2048, 00:09:45.766 "data_size": 63488 00:09:45.766 } 00:09:45.766 ] 00:09:45.766 }' 00:09:45.766 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.766 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.337 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:46.337 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:46.337 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:46.337 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:46.337 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:46.337 
12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:46.337 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:46.337 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.337 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.337 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:46.337 [2024-11-26 12:53:03.807237] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.337 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.337 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:46.337 "name": "Existed_Raid", 00:09:46.337 "aliases": [ 00:09:46.337 "92b20ddd-43f3-4fa0-9d60-a32bea60ed85" 00:09:46.337 ], 00:09:46.337 "product_name": "Raid Volume", 00:09:46.337 "block_size": 512, 00:09:46.337 "num_blocks": 253952, 00:09:46.337 "uuid": "92b20ddd-43f3-4fa0-9d60-a32bea60ed85", 00:09:46.337 "assigned_rate_limits": { 00:09:46.337 "rw_ios_per_sec": 0, 00:09:46.337 "rw_mbytes_per_sec": 0, 00:09:46.337 "r_mbytes_per_sec": 0, 00:09:46.337 "w_mbytes_per_sec": 0 00:09:46.337 }, 00:09:46.337 "claimed": false, 00:09:46.337 "zoned": false, 00:09:46.337 "supported_io_types": { 00:09:46.337 "read": true, 00:09:46.337 "write": true, 00:09:46.337 "unmap": true, 00:09:46.337 "flush": true, 00:09:46.337 "reset": true, 00:09:46.337 "nvme_admin": false, 00:09:46.337 "nvme_io": false, 00:09:46.337 "nvme_io_md": false, 00:09:46.337 "write_zeroes": true, 00:09:46.337 "zcopy": false, 00:09:46.337 "get_zone_info": false, 00:09:46.337 "zone_management": false, 00:09:46.337 "zone_append": false, 00:09:46.337 "compare": false, 00:09:46.337 "compare_and_write": false, 00:09:46.337 "abort": 
false, 00:09:46.337 "seek_hole": false, 00:09:46.337 "seek_data": false, 00:09:46.337 "copy": false, 00:09:46.337 "nvme_iov_md": false 00:09:46.337 }, 00:09:46.337 "memory_domains": [ 00:09:46.337 { 00:09:46.337 "dma_device_id": "system", 00:09:46.337 "dma_device_type": 1 00:09:46.337 }, 00:09:46.337 { 00:09:46.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.337 "dma_device_type": 2 00:09:46.337 }, 00:09:46.337 { 00:09:46.337 "dma_device_id": "system", 00:09:46.337 "dma_device_type": 1 00:09:46.337 }, 00:09:46.337 { 00:09:46.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.337 "dma_device_type": 2 00:09:46.337 }, 00:09:46.337 { 00:09:46.337 "dma_device_id": "system", 00:09:46.337 "dma_device_type": 1 00:09:46.337 }, 00:09:46.337 { 00:09:46.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.337 "dma_device_type": 2 00:09:46.337 }, 00:09:46.337 { 00:09:46.337 "dma_device_id": "system", 00:09:46.337 "dma_device_type": 1 00:09:46.337 }, 00:09:46.337 { 00:09:46.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.337 "dma_device_type": 2 00:09:46.337 } 00:09:46.337 ], 00:09:46.337 "driver_specific": { 00:09:46.337 "raid": { 00:09:46.337 "uuid": "92b20ddd-43f3-4fa0-9d60-a32bea60ed85", 00:09:46.337 "strip_size_kb": 64, 00:09:46.337 "state": "online", 00:09:46.337 "raid_level": "raid0", 00:09:46.337 "superblock": true, 00:09:46.337 "num_base_bdevs": 4, 00:09:46.337 "num_base_bdevs_discovered": 4, 00:09:46.337 "num_base_bdevs_operational": 4, 00:09:46.337 "base_bdevs_list": [ 00:09:46.337 { 00:09:46.337 "name": "NewBaseBdev", 00:09:46.337 "uuid": "0b8e244b-3f54-4d6b-8ad8-9882939011b2", 00:09:46.337 "is_configured": true, 00:09:46.337 "data_offset": 2048, 00:09:46.337 "data_size": 63488 00:09:46.337 }, 00:09:46.337 { 00:09:46.337 "name": "BaseBdev2", 00:09:46.337 "uuid": "36fec8e9-bf47-4d8c-9a0c-482dd97b8944", 00:09:46.337 "is_configured": true, 00:09:46.337 "data_offset": 2048, 00:09:46.337 "data_size": 63488 00:09:46.337 }, 00:09:46.337 { 00:09:46.337 
"name": "BaseBdev3", 00:09:46.337 "uuid": "91a912e7-4120-4ef9-9863-4e376ec06238", 00:09:46.337 "is_configured": true, 00:09:46.337 "data_offset": 2048, 00:09:46.337 "data_size": 63488 00:09:46.337 }, 00:09:46.337 { 00:09:46.337 "name": "BaseBdev4", 00:09:46.337 "uuid": "05346068-3643-4fc3-ad77-28aee00d42b9", 00:09:46.337 "is_configured": true, 00:09:46.337 "data_offset": 2048, 00:09:46.337 "data_size": 63488 00:09:46.337 } 00:09:46.337 ] 00:09:46.337 } 00:09:46.337 } 00:09:46.337 }' 00:09:46.337 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:46.337 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:46.337 BaseBdev2 00:09:46.337 BaseBdev3 00:09:46.337 BaseBdev4' 00:09:46.337 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.337 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:46.337 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.337 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:46.337 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.337 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.337 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.338 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.338 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.338 12:53:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.338 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.338 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:46.338 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.338 12:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.338 12:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.338 12:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.597 [2024-11-26 12:53:04.142336] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:46.597 [2024-11-26 12:53:04.142399] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:46.597 [2024-11-26 12:53:04.142499] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:46.597 [2024-11-26 12:53:04.142577] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:46.597 [2024-11-26 12:53:04.142610] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, 
state offline 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81229 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 81229 ']' 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 81229 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81229 00:09:46.597 killing process with pid 81229 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81229' 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 81229 00:09:46.597 [2024-11-26 12:53:04.190424] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:46.597 12:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 81229 00:09:46.597 [2024-11-26 12:53:04.231123] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:46.856 12:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:46.856 ************************************ 00:09:46.856 END TEST raid_state_function_test_sb 00:09:46.856 ************************************ 00:09:46.856 00:09:46.856 real 0m9.228s 00:09:46.856 user 0m15.753s 00:09:46.856 sys 
0m1.923s 00:09:46.856 12:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:46.856 12:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.116 12:53:04 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:09:47.116 12:53:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:47.116 12:53:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:47.116 12:53:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:47.116 ************************************ 00:09:47.116 START TEST raid_superblock_test 00:09:47.116 ************************************ 00:09:47.116 12:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:09:47.116 12:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:47.116 12:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:09:47.116 12:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:47.116 12:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:47.116 12:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:47.116 12:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:47.116 12:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:47.116 12:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:47.116 12:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:47.116 12:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:47.116 12:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # 
local strip_size_create_arg 00:09:47.116 12:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:47.116 12:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:47.116 12:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:47.116 12:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:47.116 12:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:47.116 12:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81877 00:09:47.116 12:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:47.116 12:53:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81877 00:09:47.116 12:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81877 ']' 00:09:47.116 12:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.116 12:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:47.116 12:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.116 12:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:47.116 12:53:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.116 [2024-11-26 12:53:04.641785] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:47.117 [2024-11-26 12:53:04.641991] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81877 ] 00:09:47.376 [2024-11-26 12:53:04.799005] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.376 [2024-11-26 12:53:04.843891] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.376 [2024-11-26 12:53:04.885717] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.376 [2024-11-26 12:53:04.885833] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:47.944 
12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.944 malloc1 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.944 [2024-11-26 12:53:05.487858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:47.944 [2024-11-26 12:53:05.487971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.944 [2024-11-26 12:53:05.488010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:47.944 [2024-11-26 12:53:05.488043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.944 [2024-11-26 12:53:05.490141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.944 [2024-11-26 12:53:05.490227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:47.944 pt1 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.944 malloc2 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.944 [2024-11-26 12:53:05.529089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:47.944 [2024-11-26 12:53:05.529197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.944 [2024-11-26 12:53:05.529218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:47.944 [2024-11-26 12:53:05.529228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.944 [2024-11-26 12:53:05.531300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.944 [2024-11-26 12:53:05.531336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:47.944 
pt2 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.944 malloc3 00:09:47.944 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.945 [2024-11-26 12:53:05.557595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:47.945 [2024-11-26 12:53:05.557692] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.945 [2024-11-26 12:53:05.557726] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:47.945 [2024-11-26 12:53:05.557754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.945 [2024-11-26 12:53:05.559811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.945 [2024-11-26 12:53:05.559896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:47.945 pt3 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.945 malloc4 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.945 [2024-11-26 12:53:05.590013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:47.945 [2024-11-26 12:53:05.590110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.945 [2024-11-26 12:53:05.590142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:47.945 [2024-11-26 12:53:05.590181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.945 [2024-11-26 12:53:05.592159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.945 [2024-11-26 12:53:05.592257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:47.945 pt4 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.945 [2024-11-26 12:53:05.602062] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:47.945 [2024-11-26 
12:53:05.603875] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:47.945 [2024-11-26 12:53:05.603983] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:47.945 [2024-11-26 12:53:05.604063] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:47.945 [2024-11-26 12:53:05.604263] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:47.945 [2024-11-26 12:53:05.604318] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:47.945 [2024-11-26 12:53:05.604604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:47.945 [2024-11-26 12:53:05.604779] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:47.945 [2024-11-26 12:53:05.604820] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:09:47.945 [2024-11-26 12:53:05.604974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.945 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.204 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.204 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.204 "name": "raid_bdev1", 00:09:48.204 "uuid": "ef74fc45-48a8-4b89-8981-2dec2aa3236c", 00:09:48.204 "strip_size_kb": 64, 00:09:48.204 "state": "online", 00:09:48.204 "raid_level": "raid0", 00:09:48.204 "superblock": true, 00:09:48.204 "num_base_bdevs": 4, 00:09:48.204 "num_base_bdevs_discovered": 4, 00:09:48.204 "num_base_bdevs_operational": 4, 00:09:48.204 "base_bdevs_list": [ 00:09:48.204 { 00:09:48.204 "name": "pt1", 00:09:48.204 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:48.204 "is_configured": true, 00:09:48.204 "data_offset": 2048, 00:09:48.204 "data_size": 63488 00:09:48.204 }, 00:09:48.204 { 00:09:48.204 "name": "pt2", 00:09:48.204 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.204 "is_configured": true, 00:09:48.204 "data_offset": 2048, 00:09:48.204 "data_size": 63488 00:09:48.204 }, 00:09:48.204 { 00:09:48.204 "name": "pt3", 00:09:48.204 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:48.204 "is_configured": true, 00:09:48.204 "data_offset": 2048, 00:09:48.204 
"data_size": 63488 00:09:48.204 }, 00:09:48.204 { 00:09:48.204 "name": "pt4", 00:09:48.204 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:48.204 "is_configured": true, 00:09:48.204 "data_offset": 2048, 00:09:48.204 "data_size": 63488 00:09:48.204 } 00:09:48.204 ] 00:09:48.204 }' 00:09:48.204 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.204 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.463 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:48.463 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:48.463 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:48.463 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:48.463 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:48.463 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:48.463 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:48.463 12:53:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:48.463 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.463 12:53:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.463 [2024-11-26 12:53:06.001609] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.463 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.463 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:48.463 "name": "raid_bdev1", 00:09:48.463 "aliases": [ 00:09:48.463 "ef74fc45-48a8-4b89-8981-2dec2aa3236c" 
00:09:48.463 ], 00:09:48.463 "product_name": "Raid Volume", 00:09:48.463 "block_size": 512, 00:09:48.463 "num_blocks": 253952, 00:09:48.463 "uuid": "ef74fc45-48a8-4b89-8981-2dec2aa3236c", 00:09:48.463 "assigned_rate_limits": { 00:09:48.463 "rw_ios_per_sec": 0, 00:09:48.463 "rw_mbytes_per_sec": 0, 00:09:48.463 "r_mbytes_per_sec": 0, 00:09:48.463 "w_mbytes_per_sec": 0 00:09:48.463 }, 00:09:48.463 "claimed": false, 00:09:48.463 "zoned": false, 00:09:48.463 "supported_io_types": { 00:09:48.463 "read": true, 00:09:48.463 "write": true, 00:09:48.463 "unmap": true, 00:09:48.463 "flush": true, 00:09:48.463 "reset": true, 00:09:48.463 "nvme_admin": false, 00:09:48.463 "nvme_io": false, 00:09:48.463 "nvme_io_md": false, 00:09:48.463 "write_zeroes": true, 00:09:48.463 "zcopy": false, 00:09:48.463 "get_zone_info": false, 00:09:48.463 "zone_management": false, 00:09:48.463 "zone_append": false, 00:09:48.463 "compare": false, 00:09:48.463 "compare_and_write": false, 00:09:48.463 "abort": false, 00:09:48.463 "seek_hole": false, 00:09:48.463 "seek_data": false, 00:09:48.463 "copy": false, 00:09:48.463 "nvme_iov_md": false 00:09:48.463 }, 00:09:48.463 "memory_domains": [ 00:09:48.463 { 00:09:48.463 "dma_device_id": "system", 00:09:48.463 "dma_device_type": 1 00:09:48.463 }, 00:09:48.463 { 00:09:48.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.463 "dma_device_type": 2 00:09:48.463 }, 00:09:48.463 { 00:09:48.463 "dma_device_id": "system", 00:09:48.463 "dma_device_type": 1 00:09:48.463 }, 00:09:48.463 { 00:09:48.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.463 "dma_device_type": 2 00:09:48.463 }, 00:09:48.463 { 00:09:48.463 "dma_device_id": "system", 00:09:48.463 "dma_device_type": 1 00:09:48.463 }, 00:09:48.463 { 00:09:48.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.463 "dma_device_type": 2 00:09:48.463 }, 00:09:48.463 { 00:09:48.463 "dma_device_id": "system", 00:09:48.463 "dma_device_type": 1 00:09:48.463 }, 00:09:48.463 { 00:09:48.463 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:48.463 "dma_device_type": 2 00:09:48.463 } 00:09:48.463 ], 00:09:48.463 "driver_specific": { 00:09:48.463 "raid": { 00:09:48.463 "uuid": "ef74fc45-48a8-4b89-8981-2dec2aa3236c", 00:09:48.463 "strip_size_kb": 64, 00:09:48.463 "state": "online", 00:09:48.463 "raid_level": "raid0", 00:09:48.463 "superblock": true, 00:09:48.463 "num_base_bdevs": 4, 00:09:48.463 "num_base_bdevs_discovered": 4, 00:09:48.463 "num_base_bdevs_operational": 4, 00:09:48.463 "base_bdevs_list": [ 00:09:48.463 { 00:09:48.463 "name": "pt1", 00:09:48.463 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:48.463 "is_configured": true, 00:09:48.463 "data_offset": 2048, 00:09:48.463 "data_size": 63488 00:09:48.463 }, 00:09:48.463 { 00:09:48.463 "name": "pt2", 00:09:48.463 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.463 "is_configured": true, 00:09:48.463 "data_offset": 2048, 00:09:48.463 "data_size": 63488 00:09:48.463 }, 00:09:48.463 { 00:09:48.463 "name": "pt3", 00:09:48.463 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:48.463 "is_configured": true, 00:09:48.463 "data_offset": 2048, 00:09:48.463 "data_size": 63488 00:09:48.463 }, 00:09:48.463 { 00:09:48.463 "name": "pt4", 00:09:48.463 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:48.463 "is_configured": true, 00:09:48.463 "data_offset": 2048, 00:09:48.463 "data_size": 63488 00:09:48.463 } 00:09:48.463 ] 00:09:48.463 } 00:09:48.463 } 00:09:48.463 }' 00:09:48.463 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:48.464 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:48.464 pt2 00:09:48.464 pt3 00:09:48.464 pt4' 00:09:48.464 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.464 12:53:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:48.464 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.464 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.464 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:48.464 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.464 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.723 12:53:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:48.723 [2024-11-26 12:53:06.305099] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ef74fc45-48a8-4b89-8981-2dec2aa3236c 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ef74fc45-48a8-4b89-8981-2dec2aa3236c ']' 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.723 [2024-11-26 12:53:06.356724] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:48.723 [2024-11-26 12:53:06.356755] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:48.723 [2024-11-26 12:53:06.356819] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.723 [2024-11-26 12:53:06.356880] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:48.723 [2024-11-26 12:53:06.356889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.723 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.983 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:48.983 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:48.983 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:48.983 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:48.983 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.983 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.983 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.983 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:48.983 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:48.983 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.983 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.983 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.983 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:48.983 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:48.983 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.984 [2024-11-26 12:53:06.520519] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:48.984 [2024-11-26 12:53:06.522282] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:48.984 [2024-11-26 12:53:06.522322] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:48.984 [2024-11-26 12:53:06.522349] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:48.984 [2024-11-26 12:53:06.522391] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:48.984 [2024-11-26 12:53:06.522439] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:48.984 [2024-11-26 12:53:06.522461] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:48.984 [2024-11-26 12:53:06.522475] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:09:48.984 [2024-11-26 12:53:06.522488] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:48.984 [2024-11-26 12:53:06.522497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state 
configuring 00:09:48.984 request: 00:09:48.984 { 00:09:48.984 "name": "raid_bdev1", 00:09:48.984 "raid_level": "raid0", 00:09:48.984 "base_bdevs": [ 00:09:48.984 "malloc1", 00:09:48.984 "malloc2", 00:09:48.984 "malloc3", 00:09:48.984 "malloc4" 00:09:48.984 ], 00:09:48.984 "strip_size_kb": 64, 00:09:48.984 "superblock": false, 00:09:48.984 "method": "bdev_raid_create", 00:09:48.984 "req_id": 1 00:09:48.984 } 00:09:48.984 Got JSON-RPC error response 00:09:48.984 response: 00:09:48.984 { 00:09:48.984 "code": -17, 00:09:48.984 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:48.984 } 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.984 [2024-11-26 12:53:06.584362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:48.984 [2024-11-26 12:53:06.584438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.984 [2024-11-26 12:53:06.584473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:48.984 [2024-11-26 12:53:06.584499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.984 [2024-11-26 12:53:06.586485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.984 [2024-11-26 12:53:06.586563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:48.984 [2024-11-26 12:53:06.586646] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:48.984 [2024-11-26 12:53:06.586724] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:48.984 pt1 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.984 "name": "raid_bdev1", 00:09:48.984 "uuid": "ef74fc45-48a8-4b89-8981-2dec2aa3236c", 00:09:48.984 "strip_size_kb": 64, 00:09:48.984 "state": "configuring", 00:09:48.984 "raid_level": "raid0", 00:09:48.984 "superblock": true, 00:09:48.984 "num_base_bdevs": 4, 00:09:48.984 "num_base_bdevs_discovered": 1, 00:09:48.984 "num_base_bdevs_operational": 4, 00:09:48.984 "base_bdevs_list": [ 00:09:48.984 { 00:09:48.984 "name": "pt1", 00:09:48.984 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:48.984 "is_configured": true, 00:09:48.984 "data_offset": 2048, 00:09:48.984 "data_size": 63488 00:09:48.984 }, 00:09:48.984 { 00:09:48.984 "name": null, 00:09:48.984 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.984 "is_configured": false, 00:09:48.984 "data_offset": 2048, 00:09:48.984 "data_size": 63488 00:09:48.984 }, 00:09:48.984 { 00:09:48.984 "name": null, 00:09:48.984 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:48.984 "is_configured": false, 00:09:48.984 "data_offset": 2048, 00:09:48.984 "data_size": 63488 00:09:48.984 }, 00:09:48.984 { 00:09:48.984 "name": null, 00:09:48.984 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:48.984 "is_configured": false, 00:09:48.984 "data_offset": 2048, 00:09:48.984 "data_size": 63488 00:09:48.984 } 00:09:48.984 ] 00:09:48.984 }' 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.984 12:53:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.552 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:09:49.552 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:49.552 12:53:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.552 12:53:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.552 [2024-11-26 12:53:07.035589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:49.552 [2024-11-26 12:53:07.035671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.552 [2024-11-26 12:53:07.035706] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:09:49.552 [2024-11-26 12:53:07.035732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.552 [2024-11-26 12:53:07.036084] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.552 [2024-11-26 12:53:07.036135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:49.552 [2024-11-26 12:53:07.036229] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:49.552 [2024-11-26 12:53:07.036276] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:49.552 pt2 00:09:49.552 12:53:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.552 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:49.552 12:53:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.552 12:53:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.552 [2024-11-26 12:53:07.047584] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:49.552 12:53:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.552 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:49.552 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.552 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.552 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:49.552 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.552 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:49.552 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.552 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.552 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.552 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.552 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.552 12:53:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.552 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.552 12:53:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.552 12:53:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.552 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.553 "name": "raid_bdev1", 00:09:49.553 "uuid": "ef74fc45-48a8-4b89-8981-2dec2aa3236c", 00:09:49.553 "strip_size_kb": 64, 00:09:49.553 "state": "configuring", 00:09:49.553 "raid_level": "raid0", 00:09:49.553 "superblock": true, 00:09:49.553 "num_base_bdevs": 4, 00:09:49.553 "num_base_bdevs_discovered": 1, 00:09:49.553 "num_base_bdevs_operational": 4, 00:09:49.553 "base_bdevs_list": [ 00:09:49.553 { 00:09:49.553 "name": "pt1", 00:09:49.553 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:49.553 "is_configured": true, 00:09:49.553 "data_offset": 2048, 00:09:49.553 "data_size": 63488 00:09:49.553 }, 00:09:49.553 { 00:09:49.553 "name": null, 00:09:49.553 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:49.553 "is_configured": false, 00:09:49.553 "data_offset": 0, 00:09:49.553 "data_size": 63488 00:09:49.553 }, 00:09:49.553 { 00:09:49.553 "name": null, 00:09:49.553 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:49.553 "is_configured": false, 00:09:49.553 "data_offset": 2048, 00:09:49.553 "data_size": 63488 00:09:49.553 }, 00:09:49.553 { 00:09:49.553 "name": null, 00:09:49.553 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:49.553 "is_configured": false, 00:09:49.553 "data_offset": 2048, 00:09:49.553 "data_size": 63488 00:09:49.553 } 00:09:49.553 ] 00:09:49.553 }' 00:09:49.553 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.553 12:53:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:49.812 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:49.812 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:49.812 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:49.812 12:53:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.812 12:53:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.812 [2024-11-26 12:53:07.466884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:49.812 [2024-11-26 12:53:07.466938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.812 [2024-11-26 12:53:07.466954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:49.812 [2024-11-26 12:53:07.466963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.812 [2024-11-26 12:53:07.467316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.812 [2024-11-26 12:53:07.467357] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:49.812 [2024-11-26 12:53:07.467418] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:49.812 [2024-11-26 12:53:07.467440] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:49.812 pt2 00:09:49.812 12:53:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.812 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:49.812 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:49.812 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:49.812 12:53:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.812 12:53:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.812 [2024-11-26 12:53:07.482831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:49.812 [2024-11-26 12:53:07.482882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.812 [2024-11-26 12:53:07.482899] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:49.812 [2024-11-26 12:53:07.482908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.812 [2024-11-26 12:53:07.483231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.812 [2024-11-26 12:53:07.483261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:49.812 [2024-11-26 12:53:07.483323] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:49.812 [2024-11-26 12:53:07.483342] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:49.812 pt3 00:09:49.812 12:53:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.812 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:49.812 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:49.812 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:49.812 12:53:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.812 12:53:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.072 [2024-11-26 12:53:07.490827] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:50.072 [2024-11-26 12:53:07.490918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.072 [2024-11-26 12:53:07.490939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:09:50.072 [2024-11-26 12:53:07.490949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.072 [2024-11-26 12:53:07.491265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.072 [2024-11-26 12:53:07.491296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:50.072 [2024-11-26 12:53:07.491348] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:50.072 [2024-11-26 12:53:07.491368] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:50.072 [2024-11-26 12:53:07.491486] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:50.072 [2024-11-26 12:53:07.491506] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:50.072 [2024-11-26 12:53:07.491753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:50.072 [2024-11-26 12:53:07.491874] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:50.072 [2024-11-26 12:53:07.491885] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:50.072 [2024-11-26 12:53:07.491987] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.072 pt4 00:09:50.072 12:53:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.072 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:50.072 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:09:50.072 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:50.072 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.072 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.072 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:50.072 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.072 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:50.072 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.072 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.072 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.072 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.072 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.072 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.072 12:53:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.072 12:53:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.072 12:53:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.072 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.072 "name": "raid_bdev1", 00:09:50.072 "uuid": "ef74fc45-48a8-4b89-8981-2dec2aa3236c", 00:09:50.072 "strip_size_kb": 64, 00:09:50.072 "state": "online", 00:09:50.072 "raid_level": "raid0", 00:09:50.072 
"superblock": true, 00:09:50.072 "num_base_bdevs": 4, 00:09:50.072 "num_base_bdevs_discovered": 4, 00:09:50.072 "num_base_bdevs_operational": 4, 00:09:50.072 "base_bdevs_list": [ 00:09:50.072 { 00:09:50.072 "name": "pt1", 00:09:50.072 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:50.072 "is_configured": true, 00:09:50.072 "data_offset": 2048, 00:09:50.072 "data_size": 63488 00:09:50.072 }, 00:09:50.072 { 00:09:50.072 "name": "pt2", 00:09:50.072 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.072 "is_configured": true, 00:09:50.072 "data_offset": 2048, 00:09:50.072 "data_size": 63488 00:09:50.072 }, 00:09:50.072 { 00:09:50.072 "name": "pt3", 00:09:50.072 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:50.072 "is_configured": true, 00:09:50.072 "data_offset": 2048, 00:09:50.072 "data_size": 63488 00:09:50.072 }, 00:09:50.072 { 00:09:50.072 "name": "pt4", 00:09:50.072 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:50.072 "is_configured": true, 00:09:50.072 "data_offset": 2048, 00:09:50.072 "data_size": 63488 00:09:50.072 } 00:09:50.072 ] 00:09:50.072 }' 00:09:50.072 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.072 12:53:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.332 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:50.332 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:50.332 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:50.332 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:50.332 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:50.332 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:50.332 12:53:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:50.332 12:53:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.332 12:53:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.332 12:53:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:50.332 [2024-11-26 12:53:07.982294] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:50.332 12:53:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.592 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:50.592 "name": "raid_bdev1", 00:09:50.592 "aliases": [ 00:09:50.592 "ef74fc45-48a8-4b89-8981-2dec2aa3236c" 00:09:50.592 ], 00:09:50.592 "product_name": "Raid Volume", 00:09:50.592 "block_size": 512, 00:09:50.592 "num_blocks": 253952, 00:09:50.592 "uuid": "ef74fc45-48a8-4b89-8981-2dec2aa3236c", 00:09:50.592 "assigned_rate_limits": { 00:09:50.592 "rw_ios_per_sec": 0, 00:09:50.592 "rw_mbytes_per_sec": 0, 00:09:50.592 "r_mbytes_per_sec": 0, 00:09:50.592 "w_mbytes_per_sec": 0 00:09:50.592 }, 00:09:50.592 "claimed": false, 00:09:50.592 "zoned": false, 00:09:50.592 "supported_io_types": { 00:09:50.592 "read": true, 00:09:50.592 "write": true, 00:09:50.592 "unmap": true, 00:09:50.592 "flush": true, 00:09:50.592 "reset": true, 00:09:50.592 "nvme_admin": false, 00:09:50.592 "nvme_io": false, 00:09:50.592 "nvme_io_md": false, 00:09:50.592 "write_zeroes": true, 00:09:50.592 "zcopy": false, 00:09:50.592 "get_zone_info": false, 00:09:50.592 "zone_management": false, 00:09:50.592 "zone_append": false, 00:09:50.592 "compare": false, 00:09:50.592 "compare_and_write": false, 00:09:50.592 "abort": false, 00:09:50.592 "seek_hole": false, 00:09:50.592 "seek_data": false, 00:09:50.592 "copy": false, 00:09:50.592 "nvme_iov_md": false 00:09:50.592 }, 00:09:50.592 
"memory_domains": [ 00:09:50.592 { 00:09:50.592 "dma_device_id": "system", 00:09:50.592 "dma_device_type": 1 00:09:50.592 }, 00:09:50.592 { 00:09:50.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.592 "dma_device_type": 2 00:09:50.592 }, 00:09:50.592 { 00:09:50.592 "dma_device_id": "system", 00:09:50.592 "dma_device_type": 1 00:09:50.592 }, 00:09:50.592 { 00:09:50.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.592 "dma_device_type": 2 00:09:50.592 }, 00:09:50.592 { 00:09:50.592 "dma_device_id": "system", 00:09:50.592 "dma_device_type": 1 00:09:50.592 }, 00:09:50.592 { 00:09:50.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.592 "dma_device_type": 2 00:09:50.592 }, 00:09:50.592 { 00:09:50.592 "dma_device_id": "system", 00:09:50.592 "dma_device_type": 1 00:09:50.592 }, 00:09:50.592 { 00:09:50.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.592 "dma_device_type": 2 00:09:50.592 } 00:09:50.592 ], 00:09:50.592 "driver_specific": { 00:09:50.592 "raid": { 00:09:50.592 "uuid": "ef74fc45-48a8-4b89-8981-2dec2aa3236c", 00:09:50.592 "strip_size_kb": 64, 00:09:50.592 "state": "online", 00:09:50.592 "raid_level": "raid0", 00:09:50.592 "superblock": true, 00:09:50.592 "num_base_bdevs": 4, 00:09:50.592 "num_base_bdevs_discovered": 4, 00:09:50.592 "num_base_bdevs_operational": 4, 00:09:50.592 "base_bdevs_list": [ 00:09:50.592 { 00:09:50.592 "name": "pt1", 00:09:50.592 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:50.592 "is_configured": true, 00:09:50.592 "data_offset": 2048, 00:09:50.593 "data_size": 63488 00:09:50.593 }, 00:09:50.593 { 00:09:50.593 "name": "pt2", 00:09:50.593 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.593 "is_configured": true, 00:09:50.593 "data_offset": 2048, 00:09:50.593 "data_size": 63488 00:09:50.593 }, 00:09:50.593 { 00:09:50.593 "name": "pt3", 00:09:50.593 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:50.593 "is_configured": true, 00:09:50.593 "data_offset": 2048, 00:09:50.593 "data_size": 63488 
00:09:50.593 }, 00:09:50.593 { 00:09:50.593 "name": "pt4", 00:09:50.593 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:50.593 "is_configured": true, 00:09:50.593 "data_offset": 2048, 00:09:50.593 "data_size": 63488 00:09:50.593 } 00:09:50.593 ] 00:09:50.593 } 00:09:50.593 } 00:09:50.593 }' 00:09:50.593 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:50.593 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:50.593 pt2 00:09:50.593 pt3 00:09:50.593 pt4' 00:09:50.593 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.593 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:50.593 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.593 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.593 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:50.593 12:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.593 12:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.593 12:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.593 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.593 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.593 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.593 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:09:50.593 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.593 12:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.593 12:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.593 12:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.593 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.593 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.593 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.593 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:50.593 12:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.593 12:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.593 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.593 12:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.593 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.853 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.853 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.853 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:50.853 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:50.853 12:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.853 12:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.853 12:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.853 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.853 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.853 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:50.853 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:50.853 12:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.853 12:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.853 [2024-11-26 12:53:08.333639] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:50.853 12:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.853 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ef74fc45-48a8-4b89-8981-2dec2aa3236c '!=' ef74fc45-48a8-4b89-8981-2dec2aa3236c ']' 00:09:50.853 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:50.853 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:50.853 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:50.853 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81877 00:09:50.853 12:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81877 ']' 00:09:50.853 12:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81877 00:09:50.853 12:53:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:09:50.853 12:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:50.853 12:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81877 00:09:50.853 12:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:50.853 12:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:50.853 12:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81877' 00:09:50.853 killing process with pid 81877 00:09:50.853 12:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 81877 00:09:50.853 [2024-11-26 12:53:08.400965] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:50.853 [2024-11-26 12:53:08.401087] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.853 [2024-11-26 12:53:08.401187] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:50.853 [2024-11-26 12:53:08.401234] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:50.853 12:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 81877 00:09:50.853 [2024-11-26 12:53:08.444810] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:51.113 12:53:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:51.113 ************************************ 00:09:51.113 END TEST raid_superblock_test 00:09:51.113 ************************************ 00:09:51.113 00:09:51.113 real 0m4.135s 00:09:51.113 user 0m6.507s 00:09:51.113 sys 0m0.894s 00:09:51.113 12:53:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:51.113 12:53:08
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.113 12:53:08 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:09:51.113 12:53:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:51.113 12:53:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:51.113 12:53:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:51.114 ************************************ 00:09:51.114 START TEST raid_read_error_test 00:09:51.114 ************************************ 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QazYOQcNKz 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=82125 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 82125 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 82125 ']' 00:09:51.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:51.114 12:53:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.374 [2024-11-26 12:53:08.858639] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:51.374 [2024-11-26 12:53:08.858747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82125 ] 00:09:51.374 [2024-11-26 12:53:09.017254] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.634 [2024-11-26 12:53:09.062164] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.634 [2024-11-26 12:53:09.104299] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:51.634 [2024-11-26 12:53:09.104333] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.204 BaseBdev1_malloc 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.204 true 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.204 [2024-11-26 12:53:09.714341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:52.204 [2024-11-26 12:53:09.714407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.204 [2024-11-26 12:53:09.714426] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:52.204 [2024-11-26 12:53:09.714434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.204 [2024-11-26 12:53:09.716481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.204 [2024-11-26 12:53:09.716518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:52.204 BaseBdev1 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.204 BaseBdev2_malloc 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.204 true 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.204 [2024-11-26 12:53:09.769059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:52.204 [2024-11-26 12:53:09.769131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.204 [2024-11-26 12:53:09.769161] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:52.204 [2024-11-26 12:53:09.769199] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.204 [2024-11-26 12:53:09.772534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.204 [2024-11-26 12:53:09.772653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:52.204 BaseBdev2 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.204 BaseBdev3_malloc 00:09:52.204 12:53:09 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.204 true 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.204 [2024-11-26 12:53:09.809985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:52.204 [2024-11-26 12:53:09.810028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.204 [2024-11-26 12:53:09.810045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:52.204 [2024-11-26 12:53:09.810053] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.204 [2024-11-26 12:53:09.812092] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.204 [2024-11-26 12:53:09.812124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:52.204 BaseBdev3 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.204 BaseBdev4_malloc 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.204 true 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:52.204 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.205 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.205 [2024-11-26 12:53:09.842549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:52.205 [2024-11-26 12:53:09.842590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.205 [2024-11-26 12:53:09.842609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:52.205 [2024-11-26 12:53:09.842617] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.205 [2024-11-26 12:53:09.844567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.205 [2024-11-26 12:53:09.844602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:52.205 BaseBdev4 00:09:52.205 12:53:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.205 12:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:52.205 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.205 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.205 [2024-11-26 12:53:09.850580] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:52.205 [2024-11-26 12:53:09.852338] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:52.205 [2024-11-26 12:53:09.852422] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:52.205 [2024-11-26 12:53:09.852474] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:52.205 [2024-11-26 12:53:09.852663] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:09:52.205 [2024-11-26 12:53:09.852674] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:52.205 [2024-11-26 12:53:09.852908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:52.205 [2024-11-26 12:53:09.853037] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:09:52.205 [2024-11-26 12:53:09.853049] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:09:52.205 [2024-11-26 12:53:09.853174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.205 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.205 12:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:52.205 12:53:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.205 12:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.205 12:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.205 12:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.205 12:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.205 12:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.205 12:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.205 12:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.205 12:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.205 12:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.205 12:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.205 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.205 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.465 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.465 12:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.465 "name": "raid_bdev1", 00:09:52.465 "uuid": "dfc13411-cc34-4c92-b6a4-08fdafda5a4a", 00:09:52.465 "strip_size_kb": 64, 00:09:52.465 "state": "online", 00:09:52.465 "raid_level": "raid0", 00:09:52.465 "superblock": true, 00:09:52.465 "num_base_bdevs": 4, 00:09:52.465 "num_base_bdevs_discovered": 4, 00:09:52.465 "num_base_bdevs_operational": 4, 00:09:52.465 "base_bdevs_list": [ 00:09:52.465 
{ 00:09:52.465 "name": "BaseBdev1", 00:09:52.465 "uuid": "488969fe-2b0a-5f59-903d-2d996248c77d", 00:09:52.465 "is_configured": true, 00:09:52.465 "data_offset": 2048, 00:09:52.465 "data_size": 63488 00:09:52.465 }, 00:09:52.465 { 00:09:52.465 "name": "BaseBdev2", 00:09:52.465 "uuid": "05537e76-7679-51ee-961d-93b99c509a4a", 00:09:52.465 "is_configured": true, 00:09:52.465 "data_offset": 2048, 00:09:52.465 "data_size": 63488 00:09:52.465 }, 00:09:52.465 { 00:09:52.465 "name": "BaseBdev3", 00:09:52.465 "uuid": "82fcc959-4397-563d-ade1-2e54b86e7b8b", 00:09:52.465 "is_configured": true, 00:09:52.465 "data_offset": 2048, 00:09:52.465 "data_size": 63488 00:09:52.465 }, 00:09:52.465 { 00:09:52.465 "name": "BaseBdev4", 00:09:52.465 "uuid": "e4b8b29d-69ec-5eb1-9f9c-819b0e4b18dc", 00:09:52.465 "is_configured": true, 00:09:52.465 "data_offset": 2048, 00:09:52.466 "data_size": 63488 00:09:52.466 } 00:09:52.466 ] 00:09:52.466 }' 00:09:52.466 12:53:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.466 12:53:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.776 12:53:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:52.776 12:53:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:52.776 [2024-11-26 12:53:10.378049] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:53.715 12:53:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:53.715 12:53:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.715 12:53:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.715 12:53:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.715 12:53:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:53.715 12:53:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:53.715 12:53:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:53.715 12:53:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:53.715 12:53:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:53.715 12:53:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.715 12:53:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:53.715 12:53:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.715 12:53:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.715 12:53:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.715 12:53:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.715 12:53:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.715 12:53:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.715 12:53:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.715 12:53:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.715 12:53:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.715 12:53:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.715 12:53:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.715 12:53:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.715 "name": "raid_bdev1", 00:09:53.715 "uuid": "dfc13411-cc34-4c92-b6a4-08fdafda5a4a", 00:09:53.715 "strip_size_kb": 64, 00:09:53.715 "state": "online", 00:09:53.715 "raid_level": "raid0", 00:09:53.715 "superblock": true, 00:09:53.715 "num_base_bdevs": 4, 00:09:53.715 "num_base_bdevs_discovered": 4, 00:09:53.715 "num_base_bdevs_operational": 4, 00:09:53.715 "base_bdevs_list": [ 00:09:53.715 { 00:09:53.715 "name": "BaseBdev1", 00:09:53.715 "uuid": "488969fe-2b0a-5f59-903d-2d996248c77d", 00:09:53.715 "is_configured": true, 00:09:53.715 "data_offset": 2048, 00:09:53.715 "data_size": 63488 00:09:53.715 }, 00:09:53.715 { 00:09:53.715 "name": "BaseBdev2", 00:09:53.715 "uuid": "05537e76-7679-51ee-961d-93b99c509a4a", 00:09:53.715 "is_configured": true, 00:09:53.715 "data_offset": 2048, 00:09:53.715 "data_size": 63488 00:09:53.715 }, 00:09:53.715 { 00:09:53.715 "name": "BaseBdev3", 00:09:53.715 "uuid": "82fcc959-4397-563d-ade1-2e54b86e7b8b", 00:09:53.715 "is_configured": true, 00:09:53.715 "data_offset": 2048, 00:09:53.715 "data_size": 63488 00:09:53.715 }, 00:09:53.715 { 00:09:53.715 "name": "BaseBdev4", 00:09:53.715 "uuid": "e4b8b29d-69ec-5eb1-9f9c-819b0e4b18dc", 00:09:53.715 "is_configured": true, 00:09:53.715 "data_offset": 2048, 00:09:53.715 "data_size": 63488 00:09:53.715 } 00:09:53.715 ] 00:09:53.715 }' 00:09:53.715 12:53:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.715 12:53:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.285 12:53:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:54.285 12:53:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.285 12:53:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.285 [2024-11-26 12:53:11.753492] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:54.285 [2024-11-26 12:53:11.753574] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:54.285 [2024-11-26 12:53:11.756084] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:54.285 [2024-11-26 12:53:11.756199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.285 [2024-11-26 12:53:11.756256] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:54.285 [2024-11-26 12:53:11.756266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:09:54.285 { 00:09:54.285 "results": [ 00:09:54.285 { 00:09:54.285 "job": "raid_bdev1", 00:09:54.286 "core_mask": "0x1", 00:09:54.286 "workload": "randrw", 00:09:54.286 "percentage": 50, 00:09:54.286 "status": "finished", 00:09:54.286 "queue_depth": 1, 00:09:54.286 "io_size": 131072, 00:09:54.286 "runtime": 1.376284, 00:09:54.286 "iops": 17299.481793002025, 00:09:54.286 "mibps": 2162.435224125253, 00:09:54.286 "io_failed": 1, 00:09:54.286 "io_timeout": 0, 00:09:54.286 "avg_latency_us": 80.18067576465064, 00:09:54.286 "min_latency_us": 24.593886462882097, 00:09:54.286 "max_latency_us": 1366.5257641921398 00:09:54.286 } 00:09:54.286 ], 00:09:54.286 "core_count": 1 00:09:54.286 } 00:09:54.286 12:53:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.286 12:53:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 82125 00:09:54.286 12:53:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 82125 ']' 00:09:54.286 12:53:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 82125 00:09:54.286 12:53:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:54.286 12:53:11 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:54.286 12:53:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82125 00:09:54.286 12:53:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:54.286 12:53:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:54.286 killing process with pid 82125 00:09:54.286 12:53:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82125' 00:09:54.286 12:53:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 82125 00:09:54.286 [2024-11-26 12:53:11.797931] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:54.286 12:53:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 82125 00:09:54.286 [2024-11-26 12:53:11.833632] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:54.546 12:53:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QazYOQcNKz 00:09:54.546 12:53:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:54.546 12:53:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:54.546 12:53:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:54.546 12:53:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:54.546 12:53:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:54.546 12:53:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:54.546 ************************************ 00:09:54.546 END TEST raid_read_error_test 00:09:54.546 12:53:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:54.546 00:09:54.546 real 0m3.316s 00:09:54.546 user 0m4.159s 00:09:54.546 sys 0m0.562s 
00:09:54.546 12:53:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:54.546 12:53:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.546 ************************************ 00:09:54.546 12:53:12 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:09:54.546 12:53:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:54.546 12:53:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:54.546 12:53:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:54.546 ************************************ 00:09:54.546 START TEST raid_write_error_test 00:09:54.546 ************************************ 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.q2dpcnPi3P 00:09:54.546 12:53:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=82254 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 82254 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 82254 ']' 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.546 12:53:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:54.547 12:53:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.807 [2024-11-26 12:53:12.254326] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:54.807 [2024-11-26 12:53:12.254525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82254 ] 00:09:54.807 [2024-11-26 12:53:12.408083] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.807 [2024-11-26 12:53:12.452969] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.067 [2024-11-26 12:53:12.495142] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.067 [2024-11-26 12:53:12.495190] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.639 BaseBdev1_malloc 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.639 true 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.639 [2024-11-26 12:53:13.097274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:55.639 [2024-11-26 12:53:13.097323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.639 [2024-11-26 12:53:13.097342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:55.639 [2024-11-26 12:53:13.097351] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.639 [2024-11-26 12:53:13.099453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.639 [2024-11-26 12:53:13.099487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:55.639 BaseBdev1 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.639 BaseBdev2_malloc 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:55.639 12:53:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.639 true 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.639 [2024-11-26 12:53:13.158739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:55.639 [2024-11-26 12:53:13.158796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.639 [2024-11-26 12:53:13.158820] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:55.639 [2024-11-26 12:53:13.158831] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.639 [2024-11-26 12:53:13.161424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.639 [2024-11-26 12:53:13.161465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:55.639 BaseBdev2 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:55.639 BaseBdev3_malloc 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.639 true 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.639 [2024-11-26 12:53:13.199058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:55.639 [2024-11-26 12:53:13.199101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.639 [2024-11-26 12:53:13.199121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:55.639 [2024-11-26 12:53:13.199130] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.639 [2024-11-26 12:53:13.201117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.639 [2024-11-26 12:53:13.201151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:55.639 BaseBdev3 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.639 BaseBdev4_malloc 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.639 true 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.639 [2024-11-26 12:53:13.239411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:55.639 [2024-11-26 12:53:13.239451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.639 [2024-11-26 12:53:13.239472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:55.639 [2024-11-26 12:53:13.239480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.639 [2024-11-26 12:53:13.241434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.639 [2024-11-26 12:53:13.241467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:55.639 BaseBdev4 
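Each base bdev above is created with `bdev_malloc_create 32 512` (32 MiB, 512-byte blocks), and once claimed by the raid bdev it reports `data_offset: 2048` and `data_size: 63488` in `base_bdevs_list`. That data size follows directly from the superblock reservation; a quick sketch of the arithmetic:

```shell
# 32 MiB of 512-byte blocks gives 65536 blocks per malloc bdev; the raid
# superblock (-s flag on bdev_raid_create) reserves the first 2048 blocks,
# leaving 63488 data blocks per base bdev, matching the reported data_size.
blocks_per_bdev=$(( 32 * 1024 * 1024 / 512 ))
data_blocks=$(( blocks_per_bdev - 2048 ))
echo "$data_blocks"
```
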
00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.639 [2024-11-26 12:53:13.251438] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:55.639 [2024-11-26 12:53:13.253244] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:55.639 [2024-11-26 12:53:13.253325] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:55.639 [2024-11-26 12:53:13.253377] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:55.639 [2024-11-26 12:53:13.253560] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:09:55.639 [2024-11-26 12:53:13.253572] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:55.639 [2024-11-26 12:53:13.253812] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:55.639 [2024-11-26 12:53:13.253940] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:09:55.639 [2024-11-26 12:53:13.253951] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:09:55.639 [2024-11-26 12:53:13.254063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.639 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.640 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.640 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.640 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.640 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.640 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.640 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.640 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.640 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.640 "name": "raid_bdev1", 00:09:55.640 "uuid": "b3aa20d7-9667-4bcc-8476-9125a08d4c25", 00:09:55.640 "strip_size_kb": 64, 00:09:55.640 "state": "online", 00:09:55.640 "raid_level": "raid0", 00:09:55.640 "superblock": true, 00:09:55.640 "num_base_bdevs": 4, 00:09:55.640 "num_base_bdevs_discovered": 4, 00:09:55.640 
"num_base_bdevs_operational": 4, 00:09:55.640 "base_bdevs_list": [ 00:09:55.640 { 00:09:55.640 "name": "BaseBdev1", 00:09:55.640 "uuid": "4481aabe-66ec-5e24-81b9-061f95675cd5", 00:09:55.640 "is_configured": true, 00:09:55.640 "data_offset": 2048, 00:09:55.640 "data_size": 63488 00:09:55.640 }, 00:09:55.640 { 00:09:55.640 "name": "BaseBdev2", 00:09:55.640 "uuid": "313c8700-3902-5068-9815-6959e03743d0", 00:09:55.640 "is_configured": true, 00:09:55.640 "data_offset": 2048, 00:09:55.640 "data_size": 63488 00:09:55.640 }, 00:09:55.640 { 00:09:55.640 "name": "BaseBdev3", 00:09:55.640 "uuid": "ee12cc80-699f-5e9c-afe9-40ed281f3b16", 00:09:55.640 "is_configured": true, 00:09:55.640 "data_offset": 2048, 00:09:55.640 "data_size": 63488 00:09:55.640 }, 00:09:55.640 { 00:09:55.640 "name": "BaseBdev4", 00:09:55.640 "uuid": "b979ff8d-7893-5513-a767-0fc1c39d577c", 00:09:55.640 "is_configured": true, 00:09:55.640 "data_offset": 2048, 00:09:55.640 "data_size": 63488 00:09:55.640 } 00:09:55.640 ] 00:09:55.640 }' 00:09:55.640 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.640 12:53:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.210 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:56.210 12:53:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:56.210 [2024-11-26 12:53:13.754905] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:57.151 12:53:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:57.151 12:53:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.151 12:53:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.151 12:53:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.151 12:53:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:57.151 12:53:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:57.151 12:53:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:57.151 12:53:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:57.151 12:53:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:57.151 12:53:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.151 12:53:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.151 12:53:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.151 12:53:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.151 12:53:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.151 12:53:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.151 12:53:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.151 12:53:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.151 12:53:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.151 12:53:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.151 12:53:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.151 12:53:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.151 12:53:14 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.151 12:53:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.151 "name": "raid_bdev1", 00:09:57.151 "uuid": "b3aa20d7-9667-4bcc-8476-9125a08d4c25", 00:09:57.151 "strip_size_kb": 64, 00:09:57.151 "state": "online", 00:09:57.151 "raid_level": "raid0", 00:09:57.151 "superblock": true, 00:09:57.151 "num_base_bdevs": 4, 00:09:57.151 "num_base_bdevs_discovered": 4, 00:09:57.151 "num_base_bdevs_operational": 4, 00:09:57.151 "base_bdevs_list": [ 00:09:57.151 { 00:09:57.151 "name": "BaseBdev1", 00:09:57.151 "uuid": "4481aabe-66ec-5e24-81b9-061f95675cd5", 00:09:57.151 "is_configured": true, 00:09:57.151 "data_offset": 2048, 00:09:57.151 "data_size": 63488 00:09:57.151 }, 00:09:57.151 { 00:09:57.151 "name": "BaseBdev2", 00:09:57.151 "uuid": "313c8700-3902-5068-9815-6959e03743d0", 00:09:57.151 "is_configured": true, 00:09:57.151 "data_offset": 2048, 00:09:57.151 "data_size": 63488 00:09:57.151 }, 00:09:57.151 { 00:09:57.151 "name": "BaseBdev3", 00:09:57.151 "uuid": "ee12cc80-699f-5e9c-afe9-40ed281f3b16", 00:09:57.151 "is_configured": true, 00:09:57.151 "data_offset": 2048, 00:09:57.151 "data_size": 63488 00:09:57.151 }, 00:09:57.151 { 00:09:57.151 "name": "BaseBdev4", 00:09:57.151 "uuid": "b979ff8d-7893-5513-a767-0fc1c39d577c", 00:09:57.151 "is_configured": true, 00:09:57.151 "data_offset": 2048, 00:09:57.151 "data_size": 63488 00:09:57.151 } 00:09:57.151 ] 00:09:57.151 }' 00:09:57.151 12:53:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.151 12:53:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.722 12:53:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:57.722 12:53:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.722 12:53:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:57.722 [2024-11-26 12:53:15.106759] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:57.722 [2024-11-26 12:53:15.106795] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:57.722 [2024-11-26 12:53:15.109206] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:57.722 [2024-11-26 12:53:15.109256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.722 [2024-11-26 12:53:15.109300] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:57.722 [2024-11-26 12:53:15.109308] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:09:57.722 { 00:09:57.722 "results": [ 00:09:57.722 { 00:09:57.722 "job": "raid_bdev1", 00:09:57.722 "core_mask": "0x1", 00:09:57.722 "workload": "randrw", 00:09:57.722 "percentage": 50, 00:09:57.722 "status": "finished", 00:09:57.722 "queue_depth": 1, 00:09:57.722 "io_size": 131072, 00:09:57.722 "runtime": 1.352506, 00:09:57.722 "iops": 17289.387255953025, 00:09:57.722 "mibps": 2161.173406994128, 00:09:57.722 "io_failed": 1, 00:09:57.722 "io_timeout": 0, 00:09:57.722 "avg_latency_us": 80.29511456696478, 00:09:57.722 "min_latency_us": 24.482096069868994, 00:09:57.722 "max_latency_us": 1445.2262008733624 00:09:57.722 } 00:09:57.722 ], 00:09:57.722 "core_count": 1 00:09:57.722 } 00:09:57.722 12:53:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.722 12:53:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 82254 00:09:57.722 12:53:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 82254 ']' 00:09:57.722 12:53:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 82254 00:09:57.722 12:53:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 
00:09:57.722 12:53:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:57.722 12:53:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82254 00:09:57.722 12:53:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:57.722 12:53:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:57.722 killing process with pid 82254 00:09:57.722 12:53:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82254' 00:09:57.722 12:53:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 82254 00:09:57.722 [2024-11-26 12:53:15.146631] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:57.722 12:53:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 82254 00:09:57.722 [2024-11-26 12:53:15.182297] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:57.982 12:53:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.q2dpcnPi3P 00:09:57.982 12:53:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:57.982 12:53:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:57.982 12:53:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:57.982 12:53:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:57.982 ************************************ 00:09:57.982 END TEST raid_write_error_test 00:09:57.982 ************************************ 00:09:57.982 12:53:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:57.982 12:53:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:57.982 12:53:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.74 != \0\.\0\0 ]] 00:09:57.982 00:09:57.982 real 0m3.270s 00:09:57.982 user 0m4.062s 00:09:57.982 sys 0m0.533s 00:09:57.982 12:53:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:57.982 12:53:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.982 12:53:15 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:57.982 12:53:15 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:09:57.982 12:53:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:57.982 12:53:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:57.982 12:53:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:57.982 ************************************ 00:09:57.982 START TEST raid_state_function_test 00:09:57.982 ************************************ 00:09:57.982 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:09:57.982 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:57.982 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:57.982 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:57.982 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:57.982 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:57.982 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82381 00:09:57.983 Process raid pid: 82381 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82381' 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82381 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 82381 ']' 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:57.983 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.983 [2024-11-26 12:53:15.602983] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:57.983 [2024-11-26 12:53:15.603219] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.243 [2024-11-26 12:53:15.768929] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.243 [2024-11-26 12:53:15.813300] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.243 [2024-11-26 12:53:15.855404] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.243 [2024-11-26 12:53:15.855520] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.813 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:58.813 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:58.813 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:58.813 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.813 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.813 [2024-11-26 12:53:16.416748] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:58.813 [2024-11-26 12:53:16.416794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.813 [2024-11-26 12:53:16.416805] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.813 [2024-11-26 12:53:16.416817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.813 [2024-11-26 12:53:16.416823] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:58.813 [2024-11-26 12:53:16.416834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:58.813 [2024-11-26 12:53:16.416839] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:58.813 [2024-11-26 12:53:16.416848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:58.813 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.813 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:58.813 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.813 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.813 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:58.813 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.813 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.813 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.813 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.813 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.813 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.813 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.813 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.813 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:58.813 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.813 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.813 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.813 "name": "Existed_Raid", 00:09:58.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.813 "strip_size_kb": 64, 00:09:58.813 "state": "configuring", 00:09:58.813 "raid_level": "concat", 00:09:58.813 "superblock": false, 00:09:58.813 "num_base_bdevs": 4, 00:09:58.813 "num_base_bdevs_discovered": 0, 00:09:58.813 "num_base_bdevs_operational": 4, 00:09:58.813 "base_bdevs_list": [ 00:09:58.813 { 00:09:58.813 "name": "BaseBdev1", 00:09:58.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.813 "is_configured": false, 00:09:58.813 "data_offset": 0, 00:09:58.813 "data_size": 0 00:09:58.813 }, 00:09:58.813 { 00:09:58.813 "name": "BaseBdev2", 00:09:58.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.813 "is_configured": false, 00:09:58.813 "data_offset": 0, 00:09:58.813 "data_size": 0 00:09:58.813 }, 00:09:58.813 { 00:09:58.813 "name": "BaseBdev3", 00:09:58.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.813 "is_configured": false, 00:09:58.813 "data_offset": 0, 00:09:58.813 "data_size": 0 00:09:58.813 }, 00:09:58.813 { 00:09:58.813 "name": "BaseBdev4", 00:09:58.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.813 "is_configured": false, 00:09:58.813 "data_offset": 0, 00:09:58.813 "data_size": 0 00:09:58.813 } 00:09:58.813 ] 00:09:58.813 }' 00:09:58.813 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.813 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.383 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:09:59.383 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.383 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.383 [2024-11-26 12:53:16.831963] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:59.383 [2024-11-26 12:53:16.832045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:59.383 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.383 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:59.383 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.383 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.383 [2024-11-26 12:53:16.843976] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:59.383 [2024-11-26 12:53:16.844052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:59.383 [2024-11-26 12:53:16.844078] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:59.383 [2024-11-26 12:53:16.844100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:59.384 [2024-11-26 12:53:16.844118] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:59.384 [2024-11-26 12:53:16.844138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:59.384 [2024-11-26 12:53:16.844155] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:59.384 [2024-11-26 12:53:16.844190] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.384 [2024-11-26 12:53:16.864849] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.384 BaseBdev1 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.384 [ 00:09:59.384 { 00:09:59.384 "name": "BaseBdev1", 00:09:59.384 "aliases": [ 00:09:59.384 "930953c9-d6a0-4488-897e-d5f2353feb68" 00:09:59.384 ], 00:09:59.384 "product_name": "Malloc disk", 00:09:59.384 "block_size": 512, 00:09:59.384 "num_blocks": 65536, 00:09:59.384 "uuid": "930953c9-d6a0-4488-897e-d5f2353feb68", 00:09:59.384 "assigned_rate_limits": { 00:09:59.384 "rw_ios_per_sec": 0, 00:09:59.384 "rw_mbytes_per_sec": 0, 00:09:59.384 "r_mbytes_per_sec": 0, 00:09:59.384 "w_mbytes_per_sec": 0 00:09:59.384 }, 00:09:59.384 "claimed": true, 00:09:59.384 "claim_type": "exclusive_write", 00:09:59.384 "zoned": false, 00:09:59.384 "supported_io_types": { 00:09:59.384 "read": true, 00:09:59.384 "write": true, 00:09:59.384 "unmap": true, 00:09:59.384 "flush": true, 00:09:59.384 "reset": true, 00:09:59.384 "nvme_admin": false, 00:09:59.384 "nvme_io": false, 00:09:59.384 "nvme_io_md": false, 00:09:59.384 "write_zeroes": true, 00:09:59.384 "zcopy": true, 00:09:59.384 "get_zone_info": false, 00:09:59.384 "zone_management": false, 00:09:59.384 "zone_append": false, 00:09:59.384 "compare": false, 00:09:59.384 "compare_and_write": false, 00:09:59.384 "abort": true, 00:09:59.384 "seek_hole": false, 00:09:59.384 "seek_data": false, 00:09:59.384 "copy": true, 00:09:59.384 "nvme_iov_md": false 00:09:59.384 }, 00:09:59.384 "memory_domains": [ 00:09:59.384 { 00:09:59.384 "dma_device_id": "system", 00:09:59.384 "dma_device_type": 1 00:09:59.384 }, 00:09:59.384 { 00:09:59.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.384 "dma_device_type": 2 00:09:59.384 } 00:09:59.384 ], 00:09:59.384 "driver_specific": {} 00:09:59.384 } 00:09:59.384 ] 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.384 "name": "Existed_Raid", 
00:09:59.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.384 "strip_size_kb": 64, 00:09:59.384 "state": "configuring", 00:09:59.384 "raid_level": "concat", 00:09:59.384 "superblock": false, 00:09:59.384 "num_base_bdevs": 4, 00:09:59.384 "num_base_bdevs_discovered": 1, 00:09:59.384 "num_base_bdevs_operational": 4, 00:09:59.384 "base_bdevs_list": [ 00:09:59.384 { 00:09:59.384 "name": "BaseBdev1", 00:09:59.384 "uuid": "930953c9-d6a0-4488-897e-d5f2353feb68", 00:09:59.384 "is_configured": true, 00:09:59.384 "data_offset": 0, 00:09:59.384 "data_size": 65536 00:09:59.384 }, 00:09:59.384 { 00:09:59.384 "name": "BaseBdev2", 00:09:59.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.384 "is_configured": false, 00:09:59.384 "data_offset": 0, 00:09:59.384 "data_size": 0 00:09:59.384 }, 00:09:59.384 { 00:09:59.384 "name": "BaseBdev3", 00:09:59.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.384 "is_configured": false, 00:09:59.384 "data_offset": 0, 00:09:59.384 "data_size": 0 00:09:59.384 }, 00:09:59.384 { 00:09:59.384 "name": "BaseBdev4", 00:09:59.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.384 "is_configured": false, 00:09:59.384 "data_offset": 0, 00:09:59.384 "data_size": 0 00:09:59.384 } 00:09:59.384 ] 00:09:59.384 }' 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.384 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.955 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:59.955 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.955 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.955 [2024-11-26 12:53:17.368037] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:59.955 [2024-11-26 12:53:17.368081] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:59.955 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.955 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:59.955 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.955 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.955 [2024-11-26 12:53:17.380040] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.955 [2024-11-26 12:53:17.381817] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:59.955 [2024-11-26 12:53:17.381859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:59.955 [2024-11-26 12:53:17.381869] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:59.955 [2024-11-26 12:53:17.381877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:59.955 [2024-11-26 12:53:17.381883] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:59.955 [2024-11-26 12:53:17.381902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:59.955 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.955 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:59.955 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:59.955 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:09:59.955 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.955 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.955 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:59.955 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.955 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.955 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.955 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.955 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.955 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.955 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.955 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.955 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.955 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.955 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.955 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.955 "name": "Existed_Raid", 00:09:59.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.955 "strip_size_kb": 64, 00:09:59.955 "state": "configuring", 00:09:59.955 "raid_level": "concat", 00:09:59.955 "superblock": false, 00:09:59.955 "num_base_bdevs": 4, 00:09:59.955 
"num_base_bdevs_discovered": 1, 00:09:59.955 "num_base_bdevs_operational": 4, 00:09:59.955 "base_bdevs_list": [ 00:09:59.955 { 00:09:59.955 "name": "BaseBdev1", 00:09:59.955 "uuid": "930953c9-d6a0-4488-897e-d5f2353feb68", 00:09:59.955 "is_configured": true, 00:09:59.955 "data_offset": 0, 00:09:59.955 "data_size": 65536 00:09:59.955 }, 00:09:59.955 { 00:09:59.955 "name": "BaseBdev2", 00:09:59.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.955 "is_configured": false, 00:09:59.955 "data_offset": 0, 00:09:59.955 "data_size": 0 00:09:59.955 }, 00:09:59.955 { 00:09:59.955 "name": "BaseBdev3", 00:09:59.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.955 "is_configured": false, 00:09:59.955 "data_offset": 0, 00:09:59.955 "data_size": 0 00:09:59.955 }, 00:09:59.955 { 00:09:59.955 "name": "BaseBdev4", 00:09:59.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.955 "is_configured": false, 00:09:59.955 "data_offset": 0, 00:09:59.955 "data_size": 0 00:09:59.955 } 00:09:59.955 ] 00:09:59.955 }' 00:09:59.955 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.955 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.215 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:00.215 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.215 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.215 [2024-11-26 12:53:17.831618] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:00.215 BaseBdev2 00:10:00.215 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.215 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:00.215 12:53:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:00.215 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:00.215 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:00.215 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:00.215 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:00.215 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:00.215 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.215 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.215 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.215 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:00.215 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.215 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.215 [ 00:10:00.215 { 00:10:00.216 "name": "BaseBdev2", 00:10:00.216 "aliases": [ 00:10:00.216 "76efba3f-8bd0-4e50-b161-e4c3e3f24e31" 00:10:00.216 ], 00:10:00.216 "product_name": "Malloc disk", 00:10:00.216 "block_size": 512, 00:10:00.216 "num_blocks": 65536, 00:10:00.216 "uuid": "76efba3f-8bd0-4e50-b161-e4c3e3f24e31", 00:10:00.216 "assigned_rate_limits": { 00:10:00.216 "rw_ios_per_sec": 0, 00:10:00.216 "rw_mbytes_per_sec": 0, 00:10:00.216 "r_mbytes_per_sec": 0, 00:10:00.216 "w_mbytes_per_sec": 0 00:10:00.216 }, 00:10:00.216 "claimed": true, 00:10:00.216 "claim_type": "exclusive_write", 00:10:00.216 "zoned": false, 00:10:00.216 "supported_io_types": { 
00:10:00.216 "read": true, 00:10:00.216 "write": true, 00:10:00.216 "unmap": true, 00:10:00.216 "flush": true, 00:10:00.216 "reset": true, 00:10:00.216 "nvme_admin": false, 00:10:00.216 "nvme_io": false, 00:10:00.216 "nvme_io_md": false, 00:10:00.216 "write_zeroes": true, 00:10:00.216 "zcopy": true, 00:10:00.216 "get_zone_info": false, 00:10:00.216 "zone_management": false, 00:10:00.216 "zone_append": false, 00:10:00.216 "compare": false, 00:10:00.216 "compare_and_write": false, 00:10:00.216 "abort": true, 00:10:00.216 "seek_hole": false, 00:10:00.216 "seek_data": false, 00:10:00.216 "copy": true, 00:10:00.216 "nvme_iov_md": false 00:10:00.216 }, 00:10:00.216 "memory_domains": [ 00:10:00.216 { 00:10:00.216 "dma_device_id": "system", 00:10:00.216 "dma_device_type": 1 00:10:00.216 }, 00:10:00.216 { 00:10:00.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.216 "dma_device_type": 2 00:10:00.216 } 00:10:00.216 ], 00:10:00.216 "driver_specific": {} 00:10:00.216 } 00:10:00.216 ] 00:10:00.216 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.216 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:00.216 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:00.216 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:00.216 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:00.216 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.216 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.216 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:00.216 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:00.216 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.216 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.216 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.216 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.216 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.216 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.216 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.216 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.216 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.476 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.476 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.477 "name": "Existed_Raid", 00:10:00.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.477 "strip_size_kb": 64, 00:10:00.477 "state": "configuring", 00:10:00.477 "raid_level": "concat", 00:10:00.477 "superblock": false, 00:10:00.477 "num_base_bdevs": 4, 00:10:00.477 "num_base_bdevs_discovered": 2, 00:10:00.477 "num_base_bdevs_operational": 4, 00:10:00.477 "base_bdevs_list": [ 00:10:00.477 { 00:10:00.477 "name": "BaseBdev1", 00:10:00.477 "uuid": "930953c9-d6a0-4488-897e-d5f2353feb68", 00:10:00.477 "is_configured": true, 00:10:00.477 "data_offset": 0, 00:10:00.477 "data_size": 65536 00:10:00.477 }, 00:10:00.477 { 00:10:00.477 "name": "BaseBdev2", 00:10:00.477 "uuid": "76efba3f-8bd0-4e50-b161-e4c3e3f24e31", 00:10:00.477 
"is_configured": true, 00:10:00.477 "data_offset": 0, 00:10:00.477 "data_size": 65536 00:10:00.477 }, 00:10:00.477 { 00:10:00.477 "name": "BaseBdev3", 00:10:00.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.477 "is_configured": false, 00:10:00.477 "data_offset": 0, 00:10:00.477 "data_size": 0 00:10:00.477 }, 00:10:00.477 { 00:10:00.477 "name": "BaseBdev4", 00:10:00.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.477 "is_configured": false, 00:10:00.477 "data_offset": 0, 00:10:00.477 "data_size": 0 00:10:00.477 } 00:10:00.477 ] 00:10:00.477 }' 00:10:00.477 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.477 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.737 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:00.737 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.737 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.737 [2024-11-26 12:53:18.269834] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:00.737 BaseBdev3 00:10:00.737 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.737 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:00.737 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:00.737 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:00.737 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:00.737 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:00.737 12:53:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:00.737 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:00.737 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.737 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.737 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.737 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:00.737 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.737 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.737 [ 00:10:00.737 { 00:10:00.737 "name": "BaseBdev3", 00:10:00.737 "aliases": [ 00:10:00.737 "b0bec4a3-4b93-4af0-97b9-db433b598667" 00:10:00.737 ], 00:10:00.737 "product_name": "Malloc disk", 00:10:00.737 "block_size": 512, 00:10:00.737 "num_blocks": 65536, 00:10:00.737 "uuid": "b0bec4a3-4b93-4af0-97b9-db433b598667", 00:10:00.737 "assigned_rate_limits": { 00:10:00.737 "rw_ios_per_sec": 0, 00:10:00.737 "rw_mbytes_per_sec": 0, 00:10:00.737 "r_mbytes_per_sec": 0, 00:10:00.737 "w_mbytes_per_sec": 0 00:10:00.737 }, 00:10:00.737 "claimed": true, 00:10:00.737 "claim_type": "exclusive_write", 00:10:00.737 "zoned": false, 00:10:00.737 "supported_io_types": { 00:10:00.737 "read": true, 00:10:00.737 "write": true, 00:10:00.737 "unmap": true, 00:10:00.737 "flush": true, 00:10:00.737 "reset": true, 00:10:00.737 "nvme_admin": false, 00:10:00.737 "nvme_io": false, 00:10:00.737 "nvme_io_md": false, 00:10:00.737 "write_zeroes": true, 00:10:00.737 "zcopy": true, 00:10:00.737 "get_zone_info": false, 00:10:00.737 "zone_management": false, 00:10:00.737 "zone_append": false, 00:10:00.737 "compare": false, 00:10:00.737 "compare_and_write": false, 
00:10:00.737 "abort": true, 00:10:00.737 "seek_hole": false, 00:10:00.737 "seek_data": false, 00:10:00.737 "copy": true, 00:10:00.737 "nvme_iov_md": false 00:10:00.737 }, 00:10:00.737 "memory_domains": [ 00:10:00.737 { 00:10:00.737 "dma_device_id": "system", 00:10:00.738 "dma_device_type": 1 00:10:00.738 }, 00:10:00.738 { 00:10:00.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.738 "dma_device_type": 2 00:10:00.738 } 00:10:00.738 ], 00:10:00.738 "driver_specific": {} 00:10:00.738 } 00:10:00.738 ] 00:10:00.738 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.738 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:00.738 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:00.738 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:00.738 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:00.738 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.738 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.738 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:00.738 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.738 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.738 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.738 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.738 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:00.738 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.738 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.738 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.738 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.738 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.738 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.738 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.738 "name": "Existed_Raid", 00:10:00.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.738 "strip_size_kb": 64, 00:10:00.738 "state": "configuring", 00:10:00.738 "raid_level": "concat", 00:10:00.738 "superblock": false, 00:10:00.738 "num_base_bdevs": 4, 00:10:00.738 "num_base_bdevs_discovered": 3, 00:10:00.738 "num_base_bdevs_operational": 4, 00:10:00.738 "base_bdevs_list": [ 00:10:00.738 { 00:10:00.738 "name": "BaseBdev1", 00:10:00.738 "uuid": "930953c9-d6a0-4488-897e-d5f2353feb68", 00:10:00.738 "is_configured": true, 00:10:00.738 "data_offset": 0, 00:10:00.738 "data_size": 65536 00:10:00.738 }, 00:10:00.738 { 00:10:00.738 "name": "BaseBdev2", 00:10:00.738 "uuid": "76efba3f-8bd0-4e50-b161-e4c3e3f24e31", 00:10:00.738 "is_configured": true, 00:10:00.738 "data_offset": 0, 00:10:00.738 "data_size": 65536 00:10:00.738 }, 00:10:00.738 { 00:10:00.738 "name": "BaseBdev3", 00:10:00.738 "uuid": "b0bec4a3-4b93-4af0-97b9-db433b598667", 00:10:00.738 "is_configured": true, 00:10:00.738 "data_offset": 0, 00:10:00.738 "data_size": 65536 00:10:00.738 }, 00:10:00.738 { 00:10:00.738 "name": "BaseBdev4", 00:10:00.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.738 "is_configured": false, 
00:10:00.738 "data_offset": 0, 00:10:00.738 "data_size": 0 00:10:00.738 } 00:10:00.738 ] 00:10:00.738 }' 00:10:00.738 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.738 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.308 [2024-11-26 12:53:18.696130] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:01.308 [2024-11-26 12:53:18.696251] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:01.308 [2024-11-26 12:53:18.696274] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:01.308 [2024-11-26 12:53:18.696552] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:01.308 [2024-11-26 12:53:18.696682] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:01.308 [2024-11-26 12:53:18.696694] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:10:01.308 [2024-11-26 12:53:18.696883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.308 BaseBdev4 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.308 [ 00:10:01.308 { 00:10:01.308 "name": "BaseBdev4", 00:10:01.308 "aliases": [ 00:10:01.308 "4933431c-5d40-4c9a-8a0c-f0ba6a03f3c3" 00:10:01.308 ], 00:10:01.308 "product_name": "Malloc disk", 00:10:01.308 "block_size": 512, 00:10:01.308 "num_blocks": 65536, 00:10:01.308 "uuid": "4933431c-5d40-4c9a-8a0c-f0ba6a03f3c3", 00:10:01.308 "assigned_rate_limits": { 00:10:01.308 "rw_ios_per_sec": 0, 00:10:01.308 "rw_mbytes_per_sec": 0, 00:10:01.308 "r_mbytes_per_sec": 0, 00:10:01.308 "w_mbytes_per_sec": 0 00:10:01.308 }, 00:10:01.308 "claimed": true, 00:10:01.308 "claim_type": "exclusive_write", 00:10:01.308 "zoned": false, 00:10:01.308 "supported_io_types": { 00:10:01.308 "read": true, 00:10:01.308 "write": true, 00:10:01.308 "unmap": true, 00:10:01.308 "flush": true, 00:10:01.308 "reset": true, 00:10:01.308 
"nvme_admin": false, 00:10:01.308 "nvme_io": false, 00:10:01.308 "nvme_io_md": false, 00:10:01.308 "write_zeroes": true, 00:10:01.308 "zcopy": true, 00:10:01.308 "get_zone_info": false, 00:10:01.308 "zone_management": false, 00:10:01.308 "zone_append": false, 00:10:01.308 "compare": false, 00:10:01.308 "compare_and_write": false, 00:10:01.308 "abort": true, 00:10:01.308 "seek_hole": false, 00:10:01.308 "seek_data": false, 00:10:01.308 "copy": true, 00:10:01.308 "nvme_iov_md": false 00:10:01.308 }, 00:10:01.308 "memory_domains": [ 00:10:01.308 { 00:10:01.308 "dma_device_id": "system", 00:10:01.308 "dma_device_type": 1 00:10:01.308 }, 00:10:01.308 { 00:10:01.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.308 "dma_device_type": 2 00:10:01.308 } 00:10:01.308 ], 00:10:01.308 "driver_specific": {} 00:10:01.308 } 00:10:01.308 ] 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.308 
12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.308 "name": "Existed_Raid", 00:10:01.308 "uuid": "8ac0d0b9-4d14-4a99-b104-5ee39f1e76b1", 00:10:01.308 "strip_size_kb": 64, 00:10:01.308 "state": "online", 00:10:01.308 "raid_level": "concat", 00:10:01.308 "superblock": false, 00:10:01.308 "num_base_bdevs": 4, 00:10:01.308 "num_base_bdevs_discovered": 4, 00:10:01.308 "num_base_bdevs_operational": 4, 00:10:01.308 "base_bdevs_list": [ 00:10:01.308 { 00:10:01.308 "name": "BaseBdev1", 00:10:01.308 "uuid": "930953c9-d6a0-4488-897e-d5f2353feb68", 00:10:01.308 "is_configured": true, 00:10:01.308 "data_offset": 0, 00:10:01.308 "data_size": 65536 00:10:01.308 }, 00:10:01.308 { 00:10:01.308 "name": "BaseBdev2", 00:10:01.308 "uuid": "76efba3f-8bd0-4e50-b161-e4c3e3f24e31", 00:10:01.308 "is_configured": true, 00:10:01.308 "data_offset": 0, 00:10:01.308 "data_size": 65536 00:10:01.308 }, 00:10:01.308 { 00:10:01.308 "name": "BaseBdev3", 
00:10:01.308 "uuid": "b0bec4a3-4b93-4af0-97b9-db433b598667", 00:10:01.308 "is_configured": true, 00:10:01.308 "data_offset": 0, 00:10:01.308 "data_size": 65536 00:10:01.308 }, 00:10:01.308 { 00:10:01.308 "name": "BaseBdev4", 00:10:01.308 "uuid": "4933431c-5d40-4c9a-8a0c-f0ba6a03f3c3", 00:10:01.308 "is_configured": true, 00:10:01.308 "data_offset": 0, 00:10:01.308 "data_size": 65536 00:10:01.308 } 00:10:01.308 ] 00:10:01.308 }' 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.308 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.568 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:01.568 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:01.568 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:01.568 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:01.568 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:01.568 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:01.568 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:01.568 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:01.568 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.568 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.568 [2024-11-26 12:53:19.183675] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.568 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.568 
12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:01.568 "name": "Existed_Raid", 00:10:01.568 "aliases": [ 00:10:01.568 "8ac0d0b9-4d14-4a99-b104-5ee39f1e76b1" 00:10:01.568 ], 00:10:01.568 "product_name": "Raid Volume", 00:10:01.568 "block_size": 512, 00:10:01.568 "num_blocks": 262144, 00:10:01.568 "uuid": "8ac0d0b9-4d14-4a99-b104-5ee39f1e76b1", 00:10:01.568 "assigned_rate_limits": { 00:10:01.568 "rw_ios_per_sec": 0, 00:10:01.568 "rw_mbytes_per_sec": 0, 00:10:01.568 "r_mbytes_per_sec": 0, 00:10:01.568 "w_mbytes_per_sec": 0 00:10:01.568 }, 00:10:01.568 "claimed": false, 00:10:01.568 "zoned": false, 00:10:01.568 "supported_io_types": { 00:10:01.568 "read": true, 00:10:01.568 "write": true, 00:10:01.568 "unmap": true, 00:10:01.568 "flush": true, 00:10:01.568 "reset": true, 00:10:01.568 "nvme_admin": false, 00:10:01.568 "nvme_io": false, 00:10:01.568 "nvme_io_md": false, 00:10:01.568 "write_zeroes": true, 00:10:01.568 "zcopy": false, 00:10:01.568 "get_zone_info": false, 00:10:01.568 "zone_management": false, 00:10:01.568 "zone_append": false, 00:10:01.568 "compare": false, 00:10:01.568 "compare_and_write": false, 00:10:01.568 "abort": false, 00:10:01.568 "seek_hole": false, 00:10:01.568 "seek_data": false, 00:10:01.568 "copy": false, 00:10:01.568 "nvme_iov_md": false 00:10:01.568 }, 00:10:01.568 "memory_domains": [ 00:10:01.568 { 00:10:01.568 "dma_device_id": "system", 00:10:01.568 "dma_device_type": 1 00:10:01.568 }, 00:10:01.568 { 00:10:01.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.568 "dma_device_type": 2 00:10:01.569 }, 00:10:01.569 { 00:10:01.569 "dma_device_id": "system", 00:10:01.569 "dma_device_type": 1 00:10:01.569 }, 00:10:01.569 { 00:10:01.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.569 "dma_device_type": 2 00:10:01.569 }, 00:10:01.569 { 00:10:01.569 "dma_device_id": "system", 00:10:01.569 "dma_device_type": 1 00:10:01.569 }, 00:10:01.569 { 00:10:01.569 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:01.569 "dma_device_type": 2 00:10:01.569 }, 00:10:01.569 { 00:10:01.569 "dma_device_id": "system", 00:10:01.569 "dma_device_type": 1 00:10:01.569 }, 00:10:01.569 { 00:10:01.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.569 "dma_device_type": 2 00:10:01.569 } 00:10:01.569 ], 00:10:01.569 "driver_specific": { 00:10:01.569 "raid": { 00:10:01.569 "uuid": "8ac0d0b9-4d14-4a99-b104-5ee39f1e76b1", 00:10:01.569 "strip_size_kb": 64, 00:10:01.569 "state": "online", 00:10:01.569 "raid_level": "concat", 00:10:01.569 "superblock": false, 00:10:01.569 "num_base_bdevs": 4, 00:10:01.569 "num_base_bdevs_discovered": 4, 00:10:01.569 "num_base_bdevs_operational": 4, 00:10:01.569 "base_bdevs_list": [ 00:10:01.569 { 00:10:01.569 "name": "BaseBdev1", 00:10:01.569 "uuid": "930953c9-d6a0-4488-897e-d5f2353feb68", 00:10:01.569 "is_configured": true, 00:10:01.569 "data_offset": 0, 00:10:01.569 "data_size": 65536 00:10:01.569 }, 00:10:01.569 { 00:10:01.569 "name": "BaseBdev2", 00:10:01.569 "uuid": "76efba3f-8bd0-4e50-b161-e4c3e3f24e31", 00:10:01.569 "is_configured": true, 00:10:01.569 "data_offset": 0, 00:10:01.569 "data_size": 65536 00:10:01.569 }, 00:10:01.569 { 00:10:01.569 "name": "BaseBdev3", 00:10:01.569 "uuid": "b0bec4a3-4b93-4af0-97b9-db433b598667", 00:10:01.569 "is_configured": true, 00:10:01.569 "data_offset": 0, 00:10:01.569 "data_size": 65536 00:10:01.569 }, 00:10:01.569 { 00:10:01.569 "name": "BaseBdev4", 00:10:01.569 "uuid": "4933431c-5d40-4c9a-8a0c-f0ba6a03f3c3", 00:10:01.569 "is_configured": true, 00:10:01.569 "data_offset": 0, 00:10:01.569 "data_size": 65536 00:10:01.569 } 00:10:01.569 ] 00:10:01.569 } 00:10:01.569 } 00:10:01.569 }' 00:10:01.569 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:01.829 BaseBdev2 
00:10:01.829 BaseBdev3 00:10:01.829 BaseBdev4' 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.829 12:53:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.829 12:53:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.829 [2024-11-26 12:53:19.462915] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:01.829 [2024-11-26 12:53:19.462943] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:01.829 [2024-11-26 12:53:19.462993] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.829 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.089 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.089 "name": "Existed_Raid", 00:10:02.089 "uuid": "8ac0d0b9-4d14-4a99-b104-5ee39f1e76b1", 00:10:02.089 "strip_size_kb": 64, 00:10:02.089 "state": "offline", 00:10:02.089 "raid_level": "concat", 00:10:02.089 "superblock": false, 00:10:02.089 "num_base_bdevs": 4, 00:10:02.089 "num_base_bdevs_discovered": 3, 00:10:02.089 "num_base_bdevs_operational": 3, 00:10:02.089 "base_bdevs_list": [ 00:10:02.089 { 00:10:02.089 "name": null, 00:10:02.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.089 "is_configured": false, 00:10:02.089 "data_offset": 0, 00:10:02.089 "data_size": 65536 00:10:02.089 }, 00:10:02.089 { 00:10:02.089 "name": "BaseBdev2", 00:10:02.089 "uuid": "76efba3f-8bd0-4e50-b161-e4c3e3f24e31", 00:10:02.089 "is_configured": 
true, 00:10:02.089 "data_offset": 0, 00:10:02.089 "data_size": 65536 00:10:02.089 }, 00:10:02.089 { 00:10:02.089 "name": "BaseBdev3", 00:10:02.089 "uuid": "b0bec4a3-4b93-4af0-97b9-db433b598667", 00:10:02.089 "is_configured": true, 00:10:02.089 "data_offset": 0, 00:10:02.089 "data_size": 65536 00:10:02.089 }, 00:10:02.089 { 00:10:02.089 "name": "BaseBdev4", 00:10:02.089 "uuid": "4933431c-5d40-4c9a-8a0c-f0ba6a03f3c3", 00:10:02.089 "is_configured": true, 00:10:02.089 "data_offset": 0, 00:10:02.089 "data_size": 65536 00:10:02.089 } 00:10:02.089 ] 00:10:02.089 }' 00:10:02.089 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.089 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.350 [2024-11-26 12:53:19.893232] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.350 [2024-11-26 12:53:19.964266] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:02.350 12:53:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:02.350 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.612 [2024-11-26 12:53:20.035436] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:02.612 [2024-11-26 12:53:20.035535] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.612 BaseBdev2 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.612 [ 00:10:02.612 { 00:10:02.612 "name": "BaseBdev2", 00:10:02.612 "aliases": [ 00:10:02.612 "37325f42-b0dc-48b9-88bc-cc5a26594e0a" 00:10:02.612 ], 00:10:02.612 "product_name": "Malloc disk", 00:10:02.612 "block_size": 512, 00:10:02.612 "num_blocks": 65536, 00:10:02.612 "uuid": "37325f42-b0dc-48b9-88bc-cc5a26594e0a", 00:10:02.612 "assigned_rate_limits": { 00:10:02.612 "rw_ios_per_sec": 0, 00:10:02.612 "rw_mbytes_per_sec": 0, 00:10:02.612 "r_mbytes_per_sec": 0, 00:10:02.612 "w_mbytes_per_sec": 0 00:10:02.612 }, 00:10:02.612 "claimed": false, 00:10:02.612 "zoned": false, 00:10:02.612 "supported_io_types": { 00:10:02.612 "read": true, 00:10:02.612 "write": true, 00:10:02.612 "unmap": true, 00:10:02.612 "flush": true, 00:10:02.612 "reset": true, 00:10:02.612 "nvme_admin": false, 00:10:02.612 "nvme_io": false, 00:10:02.612 "nvme_io_md": false, 00:10:02.612 "write_zeroes": true, 00:10:02.612 "zcopy": true, 00:10:02.612 "get_zone_info": false, 00:10:02.612 "zone_management": false, 00:10:02.612 "zone_append": false, 00:10:02.612 "compare": false, 00:10:02.612 "compare_and_write": false, 00:10:02.612 "abort": true, 00:10:02.612 "seek_hole": false, 00:10:02.612 
"seek_data": false, 00:10:02.612 "copy": true, 00:10:02.612 "nvme_iov_md": false 00:10:02.612 }, 00:10:02.612 "memory_domains": [ 00:10:02.612 { 00:10:02.612 "dma_device_id": "system", 00:10:02.612 "dma_device_type": 1 00:10:02.612 }, 00:10:02.612 { 00:10:02.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.612 "dma_device_type": 2 00:10:02.612 } 00:10:02.612 ], 00:10:02.612 "driver_specific": {} 00:10:02.612 } 00:10:02.612 ] 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.612 BaseBdev3 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.612 [ 00:10:02.612 { 00:10:02.612 "name": "BaseBdev3", 00:10:02.612 "aliases": [ 00:10:02.612 "3518266a-6ab3-440e-803a-102bf3160fe5" 00:10:02.612 ], 00:10:02.612 "product_name": "Malloc disk", 00:10:02.612 "block_size": 512, 00:10:02.612 "num_blocks": 65536, 00:10:02.612 "uuid": "3518266a-6ab3-440e-803a-102bf3160fe5", 00:10:02.612 "assigned_rate_limits": { 00:10:02.612 "rw_ios_per_sec": 0, 00:10:02.612 "rw_mbytes_per_sec": 0, 00:10:02.612 "r_mbytes_per_sec": 0, 00:10:02.612 "w_mbytes_per_sec": 0 00:10:02.612 }, 00:10:02.612 "claimed": false, 00:10:02.612 "zoned": false, 00:10:02.612 "supported_io_types": { 00:10:02.612 "read": true, 00:10:02.612 "write": true, 00:10:02.612 "unmap": true, 00:10:02.612 "flush": true, 00:10:02.612 "reset": true, 00:10:02.612 "nvme_admin": false, 00:10:02.612 "nvme_io": false, 00:10:02.612 "nvme_io_md": false, 00:10:02.612 "write_zeroes": true, 00:10:02.612 "zcopy": true, 00:10:02.612 "get_zone_info": false, 00:10:02.612 "zone_management": false, 00:10:02.612 "zone_append": false, 00:10:02.612 "compare": false, 00:10:02.612 "compare_and_write": false, 00:10:02.612 "abort": true, 00:10:02.612 "seek_hole": false, 00:10:02.612 "seek_data": false, 
00:10:02.612 "copy": true, 00:10:02.612 "nvme_iov_md": false 00:10:02.612 }, 00:10:02.612 "memory_domains": [ 00:10:02.612 { 00:10:02.612 "dma_device_id": "system", 00:10:02.612 "dma_device_type": 1 00:10:02.612 }, 00:10:02.612 { 00:10:02.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.612 "dma_device_type": 2 00:10:02.612 } 00:10:02.612 ], 00:10:02.612 "driver_specific": {} 00:10:02.612 } 00:10:02.612 ] 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.612 BaseBdev4 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:02.612 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:02.613 
12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:02.613 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.613 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.613 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.613 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:02.613 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.613 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.613 [ 00:10:02.613 { 00:10:02.613 "name": "BaseBdev4", 00:10:02.613 "aliases": [ 00:10:02.613 "e35a8f78-5184-4b63-b932-3448ffc65440" 00:10:02.613 ], 00:10:02.613 "product_name": "Malloc disk", 00:10:02.613 "block_size": 512, 00:10:02.613 "num_blocks": 65536, 00:10:02.613 "uuid": "e35a8f78-5184-4b63-b932-3448ffc65440", 00:10:02.613 "assigned_rate_limits": { 00:10:02.613 "rw_ios_per_sec": 0, 00:10:02.613 "rw_mbytes_per_sec": 0, 00:10:02.613 "r_mbytes_per_sec": 0, 00:10:02.613 "w_mbytes_per_sec": 0 00:10:02.613 }, 00:10:02.613 "claimed": false, 00:10:02.613 "zoned": false, 00:10:02.613 "supported_io_types": { 00:10:02.613 "read": true, 00:10:02.613 "write": true, 00:10:02.613 "unmap": true, 00:10:02.613 "flush": true, 00:10:02.613 "reset": true, 00:10:02.613 "nvme_admin": false, 00:10:02.613 "nvme_io": false, 00:10:02.613 "nvme_io_md": false, 00:10:02.613 "write_zeroes": true, 00:10:02.613 "zcopy": true, 00:10:02.613 "get_zone_info": false, 00:10:02.613 "zone_management": false, 00:10:02.613 "zone_append": false, 00:10:02.613 "compare": false, 00:10:02.613 "compare_and_write": false, 00:10:02.613 "abort": true, 00:10:02.613 "seek_hole": false, 00:10:02.613 "seek_data": false, 00:10:02.613 
"copy": true, 00:10:02.613 "nvme_iov_md": false 00:10:02.613 }, 00:10:02.613 "memory_domains": [ 00:10:02.613 { 00:10:02.613 "dma_device_id": "system", 00:10:02.613 "dma_device_type": 1 00:10:02.613 }, 00:10:02.613 { 00:10:02.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.613 "dma_device_type": 2 00:10:02.613 } 00:10:02.613 ], 00:10:02.613 "driver_specific": {} 00:10:02.613 } 00:10:02.613 ] 00:10:02.613 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.613 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:02.613 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:02.613 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:02.613 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:02.613 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.613 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.613 [2024-11-26 12:53:20.262790] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.613 [2024-11-26 12:53:20.262872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.613 [2024-11-26 12:53:20.262912] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:02.613 [2024-11-26 12:53:20.264718] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:02.613 [2024-11-26 12:53:20.264802] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:02.613 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.613 12:53:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:02.613 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.613 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.613 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.613 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.613 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.613 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.613 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.613 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.613 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.613 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.613 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.613 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.613 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.874 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.874 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.874 "name": "Existed_Raid", 00:10:02.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.874 "strip_size_kb": 64, 00:10:02.874 "state": "configuring", 00:10:02.874 
"raid_level": "concat", 00:10:02.874 "superblock": false, 00:10:02.874 "num_base_bdevs": 4, 00:10:02.874 "num_base_bdevs_discovered": 3, 00:10:02.874 "num_base_bdevs_operational": 4, 00:10:02.874 "base_bdevs_list": [ 00:10:02.874 { 00:10:02.874 "name": "BaseBdev1", 00:10:02.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.874 "is_configured": false, 00:10:02.874 "data_offset": 0, 00:10:02.874 "data_size": 0 00:10:02.874 }, 00:10:02.874 { 00:10:02.874 "name": "BaseBdev2", 00:10:02.874 "uuid": "37325f42-b0dc-48b9-88bc-cc5a26594e0a", 00:10:02.874 "is_configured": true, 00:10:02.874 "data_offset": 0, 00:10:02.874 "data_size": 65536 00:10:02.874 }, 00:10:02.874 { 00:10:02.874 "name": "BaseBdev3", 00:10:02.874 "uuid": "3518266a-6ab3-440e-803a-102bf3160fe5", 00:10:02.874 "is_configured": true, 00:10:02.874 "data_offset": 0, 00:10:02.874 "data_size": 65536 00:10:02.874 }, 00:10:02.874 { 00:10:02.874 "name": "BaseBdev4", 00:10:02.874 "uuid": "e35a8f78-5184-4b63-b932-3448ffc65440", 00:10:02.874 "is_configured": true, 00:10:02.874 "data_offset": 0, 00:10:02.874 "data_size": 65536 00:10:02.874 } 00:10:02.874 ] 00:10:02.874 }' 00:10:02.874 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.874 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.134 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:03.134 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.134 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.134 [2024-11-26 12:53:20.706005] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:03.134 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.134 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:03.134 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.134 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.134 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:03.134 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.134 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.134 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.134 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.134 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.134 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.134 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.134 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.134 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.134 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.134 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.134 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.134 "name": "Existed_Raid", 00:10:03.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.134 "strip_size_kb": 64, 00:10:03.134 "state": "configuring", 00:10:03.134 "raid_level": "concat", 00:10:03.134 "superblock": false, 
00:10:03.134 "num_base_bdevs": 4, 00:10:03.134 "num_base_bdevs_discovered": 2, 00:10:03.134 "num_base_bdevs_operational": 4, 00:10:03.134 "base_bdevs_list": [ 00:10:03.134 { 00:10:03.134 "name": "BaseBdev1", 00:10:03.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.134 "is_configured": false, 00:10:03.134 "data_offset": 0, 00:10:03.134 "data_size": 0 00:10:03.134 }, 00:10:03.134 { 00:10:03.134 "name": null, 00:10:03.134 "uuid": "37325f42-b0dc-48b9-88bc-cc5a26594e0a", 00:10:03.134 "is_configured": false, 00:10:03.134 "data_offset": 0, 00:10:03.134 "data_size": 65536 00:10:03.134 }, 00:10:03.134 { 00:10:03.134 "name": "BaseBdev3", 00:10:03.134 "uuid": "3518266a-6ab3-440e-803a-102bf3160fe5", 00:10:03.135 "is_configured": true, 00:10:03.135 "data_offset": 0, 00:10:03.135 "data_size": 65536 00:10:03.135 }, 00:10:03.135 { 00:10:03.135 "name": "BaseBdev4", 00:10:03.135 "uuid": "e35a8f78-5184-4b63-b932-3448ffc65440", 00:10:03.135 "is_configured": true, 00:10:03.135 "data_offset": 0, 00:10:03.135 "data_size": 65536 00:10:03.135 } 00:10:03.135 ] 00:10:03.135 }' 00:10:03.135 12:53:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.135 12:53:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.705 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:03.705 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.705 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.705 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.705 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.705 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:03.705 12:53:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:03.705 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.706 [2024-11-26 12:53:21.180200] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:03.706 BaseBdev1 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:03.706 [ 00:10:03.706 { 00:10:03.706 "name": "BaseBdev1", 00:10:03.706 "aliases": [ 00:10:03.706 "0f6e2e62-1036-45e1-a7ae-3e98017aaac8" 00:10:03.706 ], 00:10:03.706 "product_name": "Malloc disk", 00:10:03.706 "block_size": 512, 00:10:03.706 "num_blocks": 65536, 00:10:03.706 "uuid": "0f6e2e62-1036-45e1-a7ae-3e98017aaac8", 00:10:03.706 "assigned_rate_limits": { 00:10:03.706 "rw_ios_per_sec": 0, 00:10:03.706 "rw_mbytes_per_sec": 0, 00:10:03.706 "r_mbytes_per_sec": 0, 00:10:03.706 "w_mbytes_per_sec": 0 00:10:03.706 }, 00:10:03.706 "claimed": true, 00:10:03.706 "claim_type": "exclusive_write", 00:10:03.706 "zoned": false, 00:10:03.706 "supported_io_types": { 00:10:03.706 "read": true, 00:10:03.706 "write": true, 00:10:03.706 "unmap": true, 00:10:03.706 "flush": true, 00:10:03.706 "reset": true, 00:10:03.706 "nvme_admin": false, 00:10:03.706 "nvme_io": false, 00:10:03.706 "nvme_io_md": false, 00:10:03.706 "write_zeroes": true, 00:10:03.706 "zcopy": true, 00:10:03.706 "get_zone_info": false, 00:10:03.706 "zone_management": false, 00:10:03.706 "zone_append": false, 00:10:03.706 "compare": false, 00:10:03.706 "compare_and_write": false, 00:10:03.706 "abort": true, 00:10:03.706 "seek_hole": false, 00:10:03.706 "seek_data": false, 00:10:03.706 "copy": true, 00:10:03.706 "nvme_iov_md": false 00:10:03.706 }, 00:10:03.706 "memory_domains": [ 00:10:03.706 { 00:10:03.706 "dma_device_id": "system", 00:10:03.706 "dma_device_type": 1 00:10:03.706 }, 00:10:03.706 { 00:10:03.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.706 "dma_device_type": 2 00:10:03.706 } 00:10:03.706 ], 00:10:03.706 "driver_specific": {} 00:10:03.706 } 00:10:03.706 ] 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.706 "name": "Existed_Raid", 00:10:03.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.706 "strip_size_kb": 64, 00:10:03.706 "state": "configuring", 00:10:03.706 "raid_level": "concat", 00:10:03.706 "superblock": false, 
00:10:03.706 "num_base_bdevs": 4, 00:10:03.706 "num_base_bdevs_discovered": 3, 00:10:03.706 "num_base_bdevs_operational": 4, 00:10:03.706 "base_bdevs_list": [ 00:10:03.706 { 00:10:03.706 "name": "BaseBdev1", 00:10:03.706 "uuid": "0f6e2e62-1036-45e1-a7ae-3e98017aaac8", 00:10:03.706 "is_configured": true, 00:10:03.706 "data_offset": 0, 00:10:03.706 "data_size": 65536 00:10:03.706 }, 00:10:03.706 { 00:10:03.706 "name": null, 00:10:03.706 "uuid": "37325f42-b0dc-48b9-88bc-cc5a26594e0a", 00:10:03.706 "is_configured": false, 00:10:03.706 "data_offset": 0, 00:10:03.706 "data_size": 65536 00:10:03.706 }, 00:10:03.706 { 00:10:03.706 "name": "BaseBdev3", 00:10:03.706 "uuid": "3518266a-6ab3-440e-803a-102bf3160fe5", 00:10:03.706 "is_configured": true, 00:10:03.706 "data_offset": 0, 00:10:03.706 "data_size": 65536 00:10:03.706 }, 00:10:03.706 { 00:10:03.706 "name": "BaseBdev4", 00:10:03.706 "uuid": "e35a8f78-5184-4b63-b932-3448ffc65440", 00:10:03.706 "is_configured": true, 00:10:03.706 "data_offset": 0, 00:10:03.706 "data_size": 65536 00:10:03.706 } 00:10:03.706 ] 00:10:03.706 }' 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.706 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.295 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:04.295 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.295 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.295 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.295 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.295 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:04.295 12:53:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:04.295 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.295 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.295 [2024-11-26 12:53:21.723302] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:04.295 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.295 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:04.295 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.295 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.295 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:04.295 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.295 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.295 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.295 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.295 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.295 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.295 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.295 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.295 12:53:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.295 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.295 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.295 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.295 "name": "Existed_Raid", 00:10:04.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.295 "strip_size_kb": 64, 00:10:04.295 "state": "configuring", 00:10:04.295 "raid_level": "concat", 00:10:04.295 "superblock": false, 00:10:04.295 "num_base_bdevs": 4, 00:10:04.295 "num_base_bdevs_discovered": 2, 00:10:04.295 "num_base_bdevs_operational": 4, 00:10:04.295 "base_bdevs_list": [ 00:10:04.295 { 00:10:04.295 "name": "BaseBdev1", 00:10:04.295 "uuid": "0f6e2e62-1036-45e1-a7ae-3e98017aaac8", 00:10:04.295 "is_configured": true, 00:10:04.295 "data_offset": 0, 00:10:04.295 "data_size": 65536 00:10:04.295 }, 00:10:04.295 { 00:10:04.295 "name": null, 00:10:04.295 "uuid": "37325f42-b0dc-48b9-88bc-cc5a26594e0a", 00:10:04.295 "is_configured": false, 00:10:04.295 "data_offset": 0, 00:10:04.295 "data_size": 65536 00:10:04.295 }, 00:10:04.295 { 00:10:04.295 "name": null, 00:10:04.295 "uuid": "3518266a-6ab3-440e-803a-102bf3160fe5", 00:10:04.295 "is_configured": false, 00:10:04.295 "data_offset": 0, 00:10:04.295 "data_size": 65536 00:10:04.295 }, 00:10:04.295 { 00:10:04.295 "name": "BaseBdev4", 00:10:04.295 "uuid": "e35a8f78-5184-4b63-b932-3448ffc65440", 00:10:04.295 "is_configured": true, 00:10:04.295 "data_offset": 0, 00:10:04.295 "data_size": 65536 00:10:04.295 } 00:10:04.295 ] 00:10:04.295 }' 00:10:04.295 12:53:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.295 12:53:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.555 12:53:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.555 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:04.555 12:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.555 12:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.555 12:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.555 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:04.555 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:04.555 12:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.555 12:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.555 [2024-11-26 12:53:22.206515] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.555 12:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.555 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:04.555 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.555 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.555 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:04.555 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.555 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.555 12:53:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.555 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.555 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.555 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.555 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.555 12:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.555 12:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.555 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.815 12:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.815 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.815 "name": "Existed_Raid", 00:10:04.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.815 "strip_size_kb": 64, 00:10:04.815 "state": "configuring", 00:10:04.815 "raid_level": "concat", 00:10:04.815 "superblock": false, 00:10:04.815 "num_base_bdevs": 4, 00:10:04.815 "num_base_bdevs_discovered": 3, 00:10:04.815 "num_base_bdevs_operational": 4, 00:10:04.815 "base_bdevs_list": [ 00:10:04.815 { 00:10:04.815 "name": "BaseBdev1", 00:10:04.815 "uuid": "0f6e2e62-1036-45e1-a7ae-3e98017aaac8", 00:10:04.815 "is_configured": true, 00:10:04.815 "data_offset": 0, 00:10:04.815 "data_size": 65536 00:10:04.815 }, 00:10:04.815 { 00:10:04.815 "name": null, 00:10:04.816 "uuid": "37325f42-b0dc-48b9-88bc-cc5a26594e0a", 00:10:04.816 "is_configured": false, 00:10:04.816 "data_offset": 0, 00:10:04.816 "data_size": 65536 00:10:04.816 }, 00:10:04.816 { 00:10:04.816 "name": "BaseBdev3", 00:10:04.816 "uuid": 
"3518266a-6ab3-440e-803a-102bf3160fe5", 00:10:04.816 "is_configured": true, 00:10:04.816 "data_offset": 0, 00:10:04.816 "data_size": 65536 00:10:04.816 }, 00:10:04.816 { 00:10:04.816 "name": "BaseBdev4", 00:10:04.816 "uuid": "e35a8f78-5184-4b63-b932-3448ffc65440", 00:10:04.816 "is_configured": true, 00:10:04.816 "data_offset": 0, 00:10:04.816 "data_size": 65536 00:10:04.816 } 00:10:04.816 ] 00:10:04.816 }' 00:10:04.816 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.816 12:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.076 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.076 12:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.076 12:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.076 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:05.076 12:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.076 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:05.076 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:05.076 12:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.076 12:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.076 [2024-11-26 12:53:22.661751] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:05.076 12:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.076 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:05.076 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.076 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.076 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:05.076 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.076 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.076 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.076 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.076 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.076 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.076 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.076 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.076 12:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.076 12:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.076 12:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.076 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.076 "name": "Existed_Raid", 00:10:05.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.076 "strip_size_kb": 64, 00:10:05.076 "state": "configuring", 00:10:05.076 "raid_level": "concat", 00:10:05.076 "superblock": false, 00:10:05.076 "num_base_bdevs": 4, 00:10:05.076 
"num_base_bdevs_discovered": 2, 00:10:05.076 "num_base_bdevs_operational": 4, 00:10:05.076 "base_bdevs_list": [ 00:10:05.076 { 00:10:05.076 "name": null, 00:10:05.076 "uuid": "0f6e2e62-1036-45e1-a7ae-3e98017aaac8", 00:10:05.076 "is_configured": false, 00:10:05.076 "data_offset": 0, 00:10:05.076 "data_size": 65536 00:10:05.076 }, 00:10:05.076 { 00:10:05.076 "name": null, 00:10:05.076 "uuid": "37325f42-b0dc-48b9-88bc-cc5a26594e0a", 00:10:05.076 "is_configured": false, 00:10:05.076 "data_offset": 0, 00:10:05.076 "data_size": 65536 00:10:05.076 }, 00:10:05.076 { 00:10:05.076 "name": "BaseBdev3", 00:10:05.076 "uuid": "3518266a-6ab3-440e-803a-102bf3160fe5", 00:10:05.076 "is_configured": true, 00:10:05.076 "data_offset": 0, 00:10:05.076 "data_size": 65536 00:10:05.076 }, 00:10:05.076 { 00:10:05.076 "name": "BaseBdev4", 00:10:05.076 "uuid": "e35a8f78-5184-4b63-b932-3448ffc65440", 00:10:05.076 "is_configured": true, 00:10:05.076 "data_offset": 0, 00:10:05.076 "data_size": 65536 00:10:05.076 } 00:10:05.076 ] 00:10:05.076 }' 00:10:05.076 12:53:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.076 12:53:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.646 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.646 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:05.646 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.646 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.646 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.646 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:05.646 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:05.646 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.646 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.646 [2024-11-26 12:53:23.115247] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.646 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.646 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:05.646 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.646 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.646 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:05.646 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.646 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.646 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.646 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.646 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.646 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.647 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.647 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.647 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:05.647 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.647 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.647 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.647 "name": "Existed_Raid", 00:10:05.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.647 "strip_size_kb": 64, 00:10:05.647 "state": "configuring", 00:10:05.647 "raid_level": "concat", 00:10:05.647 "superblock": false, 00:10:05.647 "num_base_bdevs": 4, 00:10:05.647 "num_base_bdevs_discovered": 3, 00:10:05.647 "num_base_bdevs_operational": 4, 00:10:05.647 "base_bdevs_list": [ 00:10:05.647 { 00:10:05.647 "name": null, 00:10:05.647 "uuid": "0f6e2e62-1036-45e1-a7ae-3e98017aaac8", 00:10:05.647 "is_configured": false, 00:10:05.647 "data_offset": 0, 00:10:05.647 "data_size": 65536 00:10:05.647 }, 00:10:05.647 { 00:10:05.647 "name": "BaseBdev2", 00:10:05.647 "uuid": "37325f42-b0dc-48b9-88bc-cc5a26594e0a", 00:10:05.647 "is_configured": true, 00:10:05.647 "data_offset": 0, 00:10:05.647 "data_size": 65536 00:10:05.647 }, 00:10:05.647 { 00:10:05.647 "name": "BaseBdev3", 00:10:05.647 "uuid": "3518266a-6ab3-440e-803a-102bf3160fe5", 00:10:05.647 "is_configured": true, 00:10:05.647 "data_offset": 0, 00:10:05.647 "data_size": 65536 00:10:05.647 }, 00:10:05.647 { 00:10:05.647 "name": "BaseBdev4", 00:10:05.647 "uuid": "e35a8f78-5184-4b63-b932-3448ffc65440", 00:10:05.647 "is_configured": true, 00:10:05.647 "data_offset": 0, 00:10:05.647 "data_size": 65536 00:10:05.647 } 00:10:05.647 ] 00:10:05.647 }' 00:10:05.647 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.647 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.906 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:10:05.906 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.906 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.906 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.906 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.906 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:05.906 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.906 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:05.906 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.906 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.906 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.906 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0f6e2e62-1036-45e1-a7ae-3e98017aaac8 00:10:05.906 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.906 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.166 [2024-11-26 12:53:23.597386] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:06.166 [2024-11-26 12:53:23.597516] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:06.166 [2024-11-26 12:53:23.597541] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:06.166 [2024-11-26 12:53:23.597809] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:06.166 [2024-11-26 12:53:23.597960] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:06.166 [2024-11-26 12:53:23.598002] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:06.166 [2024-11-26 12:53:23.598203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.166 NewBaseBdev 00:10:06.166 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.166 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:06.167 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:06.167 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:06.167 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:06.167 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:06.167 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:06.167 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:06.167 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.167 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.167 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.167 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:06.167 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.167 12:53:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.167 [ 00:10:06.167 { 00:10:06.167 "name": "NewBaseBdev", 00:10:06.167 "aliases": [ 00:10:06.167 "0f6e2e62-1036-45e1-a7ae-3e98017aaac8" 00:10:06.167 ], 00:10:06.167 "product_name": "Malloc disk", 00:10:06.167 "block_size": 512, 00:10:06.167 "num_blocks": 65536, 00:10:06.167 "uuid": "0f6e2e62-1036-45e1-a7ae-3e98017aaac8", 00:10:06.167 "assigned_rate_limits": { 00:10:06.167 "rw_ios_per_sec": 0, 00:10:06.167 "rw_mbytes_per_sec": 0, 00:10:06.167 "r_mbytes_per_sec": 0, 00:10:06.167 "w_mbytes_per_sec": 0 00:10:06.167 }, 00:10:06.167 "claimed": true, 00:10:06.167 "claim_type": "exclusive_write", 00:10:06.167 "zoned": false, 00:10:06.167 "supported_io_types": { 00:10:06.167 "read": true, 00:10:06.167 "write": true, 00:10:06.167 "unmap": true, 00:10:06.167 "flush": true, 00:10:06.167 "reset": true, 00:10:06.167 "nvme_admin": false, 00:10:06.167 "nvme_io": false, 00:10:06.167 "nvme_io_md": false, 00:10:06.167 "write_zeroes": true, 00:10:06.167 "zcopy": true, 00:10:06.167 "get_zone_info": false, 00:10:06.167 "zone_management": false, 00:10:06.167 "zone_append": false, 00:10:06.167 "compare": false, 00:10:06.167 "compare_and_write": false, 00:10:06.167 "abort": true, 00:10:06.167 "seek_hole": false, 00:10:06.167 "seek_data": false, 00:10:06.167 "copy": true, 00:10:06.167 "nvme_iov_md": false 00:10:06.167 }, 00:10:06.167 "memory_domains": [ 00:10:06.167 { 00:10:06.167 "dma_device_id": "system", 00:10:06.167 "dma_device_type": 1 00:10:06.167 }, 00:10:06.167 { 00:10:06.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.167 "dma_device_type": 2 00:10:06.167 } 00:10:06.167 ], 00:10:06.167 "driver_specific": {} 00:10:06.167 } 00:10:06.167 ] 00:10:06.167 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.167 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:06.167 12:53:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:06.167 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.167 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.167 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:06.167 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.167 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.167 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.167 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.167 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.167 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.167 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.167 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.167 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.167 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.167 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.167 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.167 "name": "Existed_Raid", 00:10:06.167 "uuid": "ae477aae-42d6-447b-b320-dc71c749cac3", 00:10:06.167 "strip_size_kb": 64, 00:10:06.167 "state": "online", 00:10:06.167 "raid_level": 
"concat", 00:10:06.167 "superblock": false, 00:10:06.167 "num_base_bdevs": 4, 00:10:06.167 "num_base_bdevs_discovered": 4, 00:10:06.167 "num_base_bdevs_operational": 4, 00:10:06.167 "base_bdevs_list": [ 00:10:06.167 { 00:10:06.167 "name": "NewBaseBdev", 00:10:06.167 "uuid": "0f6e2e62-1036-45e1-a7ae-3e98017aaac8", 00:10:06.167 "is_configured": true, 00:10:06.167 "data_offset": 0, 00:10:06.167 "data_size": 65536 00:10:06.167 }, 00:10:06.167 { 00:10:06.167 "name": "BaseBdev2", 00:10:06.167 "uuid": "37325f42-b0dc-48b9-88bc-cc5a26594e0a", 00:10:06.167 "is_configured": true, 00:10:06.167 "data_offset": 0, 00:10:06.167 "data_size": 65536 00:10:06.167 }, 00:10:06.167 { 00:10:06.167 "name": "BaseBdev3", 00:10:06.167 "uuid": "3518266a-6ab3-440e-803a-102bf3160fe5", 00:10:06.167 "is_configured": true, 00:10:06.167 "data_offset": 0, 00:10:06.167 "data_size": 65536 00:10:06.167 }, 00:10:06.167 { 00:10:06.167 "name": "BaseBdev4", 00:10:06.167 "uuid": "e35a8f78-5184-4b63-b932-3448ffc65440", 00:10:06.167 "is_configured": true, 00:10:06.167 "data_offset": 0, 00:10:06.167 "data_size": 65536 00:10:06.167 } 00:10:06.167 ] 00:10:06.167 }' 00:10:06.167 12:53:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.167 12:53:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.427 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:06.427 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local 
cmp_raid_bdev cmp_base_bdev 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.687 [2024-11-26 12:53:24.116843] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:06.687 "name": "Existed_Raid", 00:10:06.687 "aliases": [ 00:10:06.687 "ae477aae-42d6-447b-b320-dc71c749cac3" 00:10:06.687 ], 00:10:06.687 "product_name": "Raid Volume", 00:10:06.687 "block_size": 512, 00:10:06.687 "num_blocks": 262144, 00:10:06.687 "uuid": "ae477aae-42d6-447b-b320-dc71c749cac3", 00:10:06.687 "assigned_rate_limits": { 00:10:06.687 "rw_ios_per_sec": 0, 00:10:06.687 "rw_mbytes_per_sec": 0, 00:10:06.687 "r_mbytes_per_sec": 0, 00:10:06.687 "w_mbytes_per_sec": 0 00:10:06.687 }, 00:10:06.687 "claimed": false, 00:10:06.687 "zoned": false, 00:10:06.687 "supported_io_types": { 00:10:06.687 "read": true, 00:10:06.687 "write": true, 00:10:06.687 "unmap": true, 00:10:06.687 "flush": true, 00:10:06.687 "reset": true, 00:10:06.687 "nvme_admin": false, 00:10:06.687 "nvme_io": false, 00:10:06.687 "nvme_io_md": false, 00:10:06.687 "write_zeroes": true, 00:10:06.687 "zcopy": false, 00:10:06.687 "get_zone_info": false, 00:10:06.687 "zone_management": false, 00:10:06.687 "zone_append": false, 00:10:06.687 "compare": false, 00:10:06.687 "compare_and_write": false, 00:10:06.687 "abort": false, 00:10:06.687 "seek_hole": false, 00:10:06.687 "seek_data": false, 00:10:06.687 "copy": false, 
00:10:06.687 "nvme_iov_md": false 00:10:06.687 }, 00:10:06.687 "memory_domains": [ 00:10:06.687 { 00:10:06.687 "dma_device_id": "system", 00:10:06.687 "dma_device_type": 1 00:10:06.687 }, 00:10:06.687 { 00:10:06.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.687 "dma_device_type": 2 00:10:06.687 }, 00:10:06.687 { 00:10:06.687 "dma_device_id": "system", 00:10:06.687 "dma_device_type": 1 00:10:06.687 }, 00:10:06.687 { 00:10:06.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.687 "dma_device_type": 2 00:10:06.687 }, 00:10:06.687 { 00:10:06.687 "dma_device_id": "system", 00:10:06.687 "dma_device_type": 1 00:10:06.687 }, 00:10:06.687 { 00:10:06.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.687 "dma_device_type": 2 00:10:06.687 }, 00:10:06.687 { 00:10:06.687 "dma_device_id": "system", 00:10:06.687 "dma_device_type": 1 00:10:06.687 }, 00:10:06.687 { 00:10:06.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.687 "dma_device_type": 2 00:10:06.687 } 00:10:06.687 ], 00:10:06.687 "driver_specific": { 00:10:06.687 "raid": { 00:10:06.687 "uuid": "ae477aae-42d6-447b-b320-dc71c749cac3", 00:10:06.687 "strip_size_kb": 64, 00:10:06.687 "state": "online", 00:10:06.687 "raid_level": "concat", 00:10:06.687 "superblock": false, 00:10:06.687 "num_base_bdevs": 4, 00:10:06.687 "num_base_bdevs_discovered": 4, 00:10:06.687 "num_base_bdevs_operational": 4, 00:10:06.687 "base_bdevs_list": [ 00:10:06.687 { 00:10:06.687 "name": "NewBaseBdev", 00:10:06.687 "uuid": "0f6e2e62-1036-45e1-a7ae-3e98017aaac8", 00:10:06.687 "is_configured": true, 00:10:06.687 "data_offset": 0, 00:10:06.687 "data_size": 65536 00:10:06.687 }, 00:10:06.687 { 00:10:06.687 "name": "BaseBdev2", 00:10:06.687 "uuid": "37325f42-b0dc-48b9-88bc-cc5a26594e0a", 00:10:06.687 "is_configured": true, 00:10:06.687 "data_offset": 0, 00:10:06.687 "data_size": 65536 00:10:06.687 }, 00:10:06.687 { 00:10:06.687 "name": "BaseBdev3", 00:10:06.687 "uuid": "3518266a-6ab3-440e-803a-102bf3160fe5", 00:10:06.687 
"is_configured": true, 00:10:06.687 "data_offset": 0, 00:10:06.687 "data_size": 65536 00:10:06.687 }, 00:10:06.687 { 00:10:06.687 "name": "BaseBdev4", 00:10:06.687 "uuid": "e35a8f78-5184-4b63-b932-3448ffc65440", 00:10:06.687 "is_configured": true, 00:10:06.687 "data_offset": 0, 00:10:06.687 "data_size": 65536 00:10:06.687 } 00:10:06.687 ] 00:10:06.687 } 00:10:06.687 } 00:10:06.687 }' 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:06.687 BaseBdev2 00:10:06.687 BaseBdev3 00:10:06.687 BaseBdev4' 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.687 12:53:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.687 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.688 12:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.948 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.948 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.948 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.948 12:53:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:06.948 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.948 12:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.948 12:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.948 12:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.948 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.948 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.948 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:06.948 12:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.948 12:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.948 [2024-11-26 12:53:24.447947] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:06.948 [2024-11-26 12:53:24.447973] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:06.948 [2024-11-26 12:53:24.448042] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:06.948 [2024-11-26 12:53:24.448107] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:06.948 [2024-11-26 12:53:24.448121] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:10:06.948 12:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.948 12:53:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 82381 00:10:06.948 12:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 82381 ']' 00:10:06.948 12:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 82381 00:10:06.948 12:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:06.948 12:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:06.948 12:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82381 00:10:06.948 12:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:06.948 12:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:06.948 12:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82381' 00:10:06.948 killing process with pid 82381 00:10:06.948 12:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 82381 00:10:06.948 [2024-11-26 12:53:24.496284] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:06.948 12:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 82381 00:10:06.948 [2024-11-26 12:53:24.536727] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:07.209 ************************************ 00:10:07.209 END TEST raid_state_function_test 00:10:07.209 ************************************ 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:07.209 00:10:07.209 real 0m9.282s 00:10:07.209 user 0m15.843s 00:10:07.209 sys 0m1.958s 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
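The comparison idiom traced above at `bdev_raid.sh@193` (`[[ 512 == \5\1\2\ \ \ ]]`) checks that each base bdev reports the same `block_size md_size md_interleave dif_type` tuple as the RAID bdev; xtrace backslash-escapes the right-hand side, which is how the three trailing spaces (the empty `md_size`/`md_interleave`/`dif_type` fields joined by `jq`) stay visible. A minimal sketch of that pattern, with the field strings hard-coded rather than fetched via `rpc_cmd`/`jq` as the real script does:

```shell
#!/usr/bin/env bash
# Sketch of the bdev_raid.sh@193-style check: compare the joined
# "block_size md_size md_interleave dif_type" strings of the RAID bdev
# and one base bdev. Values are hard-coded stand-ins for the jq output
# seen in the trace ("512" plus three empty fields joined with spaces).

cmp_raid_bdev='512   '   # RAID bdev tuple: block_size=512, rest empty
cmp_base_bdev='512   '   # base bdev tuple, fetched per-bdev in the real script

# Quoting the right-hand side makes [[ == ]] a literal comparison,
# so the trailing spaces must match exactly.
if [[ $cmp_base_bdev == "$cmp_raid_bdev" ]]; then
  result=match
else
  result=mismatch
fi
echo "$result"
```

In the real test this runs once per name in `base_bdev_names`, and a mismatch fails the test via the script's error trap rather than printing anything.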
00:10:07.209 12:53:24 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:10:07.209 12:53:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:07.209 12:53:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:07.209 12:53:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:07.209 ************************************ 00:10:07.209 START TEST raid_state_function_test_sb 00:10:07.209 ************************************ 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:07.209 
12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:07.209 12:53:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@229 -- # raid_pid=83030 00:10:07.210 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:07.210 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83030' 00:10:07.210 Process raid pid: 83030 00:10:07.210 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83030 00:10:07.210 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 83030 ']' 00:10:07.210 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.210 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:07.210 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.210 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:07.210 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.470 [2024-11-26 12:53:24.950157] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:07.470 [2024-11-26 12:53:24.950710] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.470 [2024-11-26 12:53:25.108420] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.730 [2024-11-26 12:53:25.153576] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.730 [2024-11-26 12:53:25.195533] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.730 [2024-11-26 12:53:25.195652] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.301 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:08.301 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:08.301 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:08.301 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.301 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.301 [2024-11-26 12:53:25.768740] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:08.301 [2024-11-26 12:53:25.768871] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:08.301 [2024-11-26 12:53:25.768889] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:08.301 [2024-11-26 12:53:25.768899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:08.301 [2024-11-26 12:53:25.768905] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:10:08.301 [2024-11-26 12:53:25.768916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:08.301 [2024-11-26 12:53:25.768922] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:08.301 [2024-11-26 12:53:25.768930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:08.301 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.301 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:08.301 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.301 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.301 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:08.301 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.301 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.301 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.301 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.301 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.301 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.301 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.301 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.301 
12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.301 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.301 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.301 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.301 "name": "Existed_Raid", 00:10:08.301 "uuid": "928016d1-a932-4db9-b27a-3cd087f6ab46", 00:10:08.301 "strip_size_kb": 64, 00:10:08.301 "state": "configuring", 00:10:08.301 "raid_level": "concat", 00:10:08.301 "superblock": true, 00:10:08.301 "num_base_bdevs": 4, 00:10:08.301 "num_base_bdevs_discovered": 0, 00:10:08.301 "num_base_bdevs_operational": 4, 00:10:08.301 "base_bdevs_list": [ 00:10:08.301 { 00:10:08.301 "name": "BaseBdev1", 00:10:08.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.301 "is_configured": false, 00:10:08.301 "data_offset": 0, 00:10:08.301 "data_size": 0 00:10:08.301 }, 00:10:08.301 { 00:10:08.301 "name": "BaseBdev2", 00:10:08.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.301 "is_configured": false, 00:10:08.301 "data_offset": 0, 00:10:08.301 "data_size": 0 00:10:08.301 }, 00:10:08.301 { 00:10:08.301 "name": "BaseBdev3", 00:10:08.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.301 "is_configured": false, 00:10:08.301 "data_offset": 0, 00:10:08.301 "data_size": 0 00:10:08.301 }, 00:10:08.301 { 00:10:08.301 "name": "BaseBdev4", 00:10:08.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.301 "is_configured": false, 00:10:08.301 "data_offset": 0, 00:10:08.301 "data_size": 0 00:10:08.301 } 00:10:08.301 ] 00:10:08.301 }' 00:10:08.301 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.301 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.561 12:53:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:08.561 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.561 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.561 [2024-11-26 12:53:26.231823] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:08.561 [2024-11-26 12:53:26.231918] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:08.561 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.561 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:08.561 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.561 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.821 [2024-11-26 12:53:26.243847] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:08.821 [2024-11-26 12:53:26.243935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:08.821 [2024-11-26 12:53:26.243961] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:08.821 [2024-11-26 12:53:26.243984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:08.821 [2024-11-26 12:53:26.244001] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:08.821 [2024-11-26 12:53:26.244021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:08.821 [2024-11-26 12:53:26.244039] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:08.821 [2024-11-26 12:53:26.244075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.821 [2024-11-26 12:53:26.264560] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:08.821 BaseBdev1 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.821 [ 00:10:08.821 { 00:10:08.821 "name": "BaseBdev1", 00:10:08.821 "aliases": [ 00:10:08.821 "7d42f489-0f80-4066-96a6-a4d1e5dc2543" 00:10:08.821 ], 00:10:08.821 "product_name": "Malloc disk", 00:10:08.821 "block_size": 512, 00:10:08.821 "num_blocks": 65536, 00:10:08.821 "uuid": "7d42f489-0f80-4066-96a6-a4d1e5dc2543", 00:10:08.821 "assigned_rate_limits": { 00:10:08.821 "rw_ios_per_sec": 0, 00:10:08.821 "rw_mbytes_per_sec": 0, 00:10:08.821 "r_mbytes_per_sec": 0, 00:10:08.821 "w_mbytes_per_sec": 0 00:10:08.821 }, 00:10:08.821 "claimed": true, 00:10:08.821 "claim_type": "exclusive_write", 00:10:08.821 "zoned": false, 00:10:08.821 "supported_io_types": { 00:10:08.821 "read": true, 00:10:08.821 "write": true, 00:10:08.821 "unmap": true, 00:10:08.821 "flush": true, 00:10:08.821 "reset": true, 00:10:08.821 "nvme_admin": false, 00:10:08.821 "nvme_io": false, 00:10:08.821 "nvme_io_md": false, 00:10:08.821 "write_zeroes": true, 00:10:08.821 "zcopy": true, 00:10:08.821 "get_zone_info": false, 00:10:08.821 "zone_management": false, 00:10:08.821 "zone_append": false, 00:10:08.821 "compare": false, 00:10:08.821 "compare_and_write": false, 00:10:08.821 "abort": true, 00:10:08.821 "seek_hole": false, 00:10:08.821 "seek_data": false, 00:10:08.821 "copy": true, 00:10:08.821 "nvme_iov_md": false 00:10:08.821 }, 00:10:08.821 "memory_domains": [ 00:10:08.821 { 00:10:08.821 "dma_device_id": "system", 00:10:08.821 "dma_device_type": 1 00:10:08.821 }, 00:10:08.821 { 00:10:08.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.821 "dma_device_type": 2 00:10:08.821 } 
00:10:08.821 ], 00:10:08.821 "driver_specific": {} 00:10:08.821 } 00:10:08.821 ] 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.821 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.822 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.822 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.822 12:53:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.822 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.822 "name": "Existed_Raid", 00:10:08.822 "uuid": "03b387f7-3612-468f-afa6-085836b98135", 00:10:08.822 "strip_size_kb": 64, 00:10:08.822 "state": "configuring", 00:10:08.822 "raid_level": "concat", 00:10:08.822 "superblock": true, 00:10:08.822 "num_base_bdevs": 4, 00:10:08.822 "num_base_bdevs_discovered": 1, 00:10:08.822 "num_base_bdevs_operational": 4, 00:10:08.822 "base_bdevs_list": [ 00:10:08.822 { 00:10:08.822 "name": "BaseBdev1", 00:10:08.822 "uuid": "7d42f489-0f80-4066-96a6-a4d1e5dc2543", 00:10:08.822 "is_configured": true, 00:10:08.822 "data_offset": 2048, 00:10:08.822 "data_size": 63488 00:10:08.822 }, 00:10:08.822 { 00:10:08.822 "name": "BaseBdev2", 00:10:08.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.822 "is_configured": false, 00:10:08.822 "data_offset": 0, 00:10:08.822 "data_size": 0 00:10:08.822 }, 00:10:08.822 { 00:10:08.822 "name": "BaseBdev3", 00:10:08.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.822 "is_configured": false, 00:10:08.822 "data_offset": 0, 00:10:08.822 "data_size": 0 00:10:08.822 }, 00:10:08.822 { 00:10:08.822 "name": "BaseBdev4", 00:10:08.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.822 "is_configured": false, 00:10:08.822 "data_offset": 0, 00:10:08.822 "data_size": 0 00:10:08.822 } 00:10:08.822 ] 00:10:08.822 }' 00:10:08.822 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.822 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.082 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:09.082 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.082 12:53:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.082 [2024-11-26 12:53:26.699833] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:09.082 [2024-11-26 12:53:26.699949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:09.082 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.082 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:09.082 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.082 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.082 [2024-11-26 12:53:26.711858] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:09.082 [2024-11-26 12:53:26.713697] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:09.082 [2024-11-26 12:53:26.713783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:09.082 [2024-11-26 12:53:26.713809] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:09.082 [2024-11-26 12:53:26.713829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:09.082 [2024-11-26 12:53:26.713847] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:09.082 [2024-11-26 12:53:26.713866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:09.082 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.082 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:09.082 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:09.082 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:09.082 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.082 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.082 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:09.082 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.082 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.082 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.082 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.082 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.082 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.082 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.082 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.082 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.082 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.082 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.082 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:09.082 "name": "Existed_Raid", 00:10:09.082 "uuid": "dc0c16ae-039a-4669-acfc-1db93848d8b6", 00:10:09.082 "strip_size_kb": 64, 00:10:09.082 "state": "configuring", 00:10:09.082 "raid_level": "concat", 00:10:09.082 "superblock": true, 00:10:09.082 "num_base_bdevs": 4, 00:10:09.082 "num_base_bdevs_discovered": 1, 00:10:09.082 "num_base_bdevs_operational": 4, 00:10:09.082 "base_bdevs_list": [ 00:10:09.082 { 00:10:09.082 "name": "BaseBdev1", 00:10:09.082 "uuid": "7d42f489-0f80-4066-96a6-a4d1e5dc2543", 00:10:09.082 "is_configured": true, 00:10:09.082 "data_offset": 2048, 00:10:09.082 "data_size": 63488 00:10:09.082 }, 00:10:09.082 { 00:10:09.082 "name": "BaseBdev2", 00:10:09.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.082 "is_configured": false, 00:10:09.082 "data_offset": 0, 00:10:09.082 "data_size": 0 00:10:09.082 }, 00:10:09.082 { 00:10:09.082 "name": "BaseBdev3", 00:10:09.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.082 "is_configured": false, 00:10:09.082 "data_offset": 0, 00:10:09.082 "data_size": 0 00:10:09.082 }, 00:10:09.082 { 00:10:09.082 "name": "BaseBdev4", 00:10:09.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.082 "is_configured": false, 00:10:09.082 "data_offset": 0, 00:10:09.082 "data_size": 0 00:10:09.082 } 00:10:09.082 ] 00:10:09.082 }' 00:10:09.082 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.342 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.602 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:09.602 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.602 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.602 [2024-11-26 12:53:27.173788] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:09.602 BaseBdev2 00:10:09.602 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.602 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:09.602 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:09.602 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:09.602 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:09.602 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:09.602 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:09.602 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:09.602 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.602 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.602 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.602 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:09.602 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.602 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.602 [ 00:10:09.602 { 00:10:09.602 "name": "BaseBdev2", 00:10:09.602 "aliases": [ 00:10:09.602 "6fa25f3e-3b9f-463e-8958-876f902dee60" 00:10:09.602 ], 00:10:09.602 "product_name": "Malloc disk", 00:10:09.602 "block_size": 512, 00:10:09.602 "num_blocks": 65536, 00:10:09.602 "uuid": "6fa25f3e-3b9f-463e-8958-876f902dee60", 
00:10:09.602 "assigned_rate_limits": { 00:10:09.602 "rw_ios_per_sec": 0, 00:10:09.602 "rw_mbytes_per_sec": 0, 00:10:09.602 "r_mbytes_per_sec": 0, 00:10:09.602 "w_mbytes_per_sec": 0 00:10:09.602 }, 00:10:09.602 "claimed": true, 00:10:09.602 "claim_type": "exclusive_write", 00:10:09.602 "zoned": false, 00:10:09.602 "supported_io_types": { 00:10:09.602 "read": true, 00:10:09.602 "write": true, 00:10:09.602 "unmap": true, 00:10:09.602 "flush": true, 00:10:09.603 "reset": true, 00:10:09.603 "nvme_admin": false, 00:10:09.603 "nvme_io": false, 00:10:09.603 "nvme_io_md": false, 00:10:09.603 "write_zeroes": true, 00:10:09.603 "zcopy": true, 00:10:09.603 "get_zone_info": false, 00:10:09.603 "zone_management": false, 00:10:09.603 "zone_append": false, 00:10:09.603 "compare": false, 00:10:09.603 "compare_and_write": false, 00:10:09.603 "abort": true, 00:10:09.603 "seek_hole": false, 00:10:09.603 "seek_data": false, 00:10:09.603 "copy": true, 00:10:09.603 "nvme_iov_md": false 00:10:09.603 }, 00:10:09.603 "memory_domains": [ 00:10:09.603 { 00:10:09.603 "dma_device_id": "system", 00:10:09.603 "dma_device_type": 1 00:10:09.603 }, 00:10:09.603 { 00:10:09.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.603 "dma_device_type": 2 00:10:09.603 } 00:10:09.603 ], 00:10:09.603 "driver_specific": {} 00:10:09.603 } 00:10:09.603 ] 00:10:09.603 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.603 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:09.603 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:09.603 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:09.603 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:09.603 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:09.603 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.603 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:09.603 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.603 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.603 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.603 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.603 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.603 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.603 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.603 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.603 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.603 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.603 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.603 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.603 "name": "Existed_Raid", 00:10:09.603 "uuid": "dc0c16ae-039a-4669-acfc-1db93848d8b6", 00:10:09.603 "strip_size_kb": 64, 00:10:09.603 "state": "configuring", 00:10:09.603 "raid_level": "concat", 00:10:09.603 "superblock": true, 00:10:09.603 "num_base_bdevs": 4, 00:10:09.603 "num_base_bdevs_discovered": 2, 00:10:09.603 
"num_base_bdevs_operational": 4, 00:10:09.603 "base_bdevs_list": [ 00:10:09.603 { 00:10:09.603 "name": "BaseBdev1", 00:10:09.603 "uuid": "7d42f489-0f80-4066-96a6-a4d1e5dc2543", 00:10:09.603 "is_configured": true, 00:10:09.603 "data_offset": 2048, 00:10:09.603 "data_size": 63488 00:10:09.603 }, 00:10:09.603 { 00:10:09.603 "name": "BaseBdev2", 00:10:09.603 "uuid": "6fa25f3e-3b9f-463e-8958-876f902dee60", 00:10:09.603 "is_configured": true, 00:10:09.603 "data_offset": 2048, 00:10:09.603 "data_size": 63488 00:10:09.603 }, 00:10:09.603 { 00:10:09.603 "name": "BaseBdev3", 00:10:09.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.603 "is_configured": false, 00:10:09.603 "data_offset": 0, 00:10:09.603 "data_size": 0 00:10:09.603 }, 00:10:09.603 { 00:10:09.603 "name": "BaseBdev4", 00:10:09.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.603 "is_configured": false, 00:10:09.603 "data_offset": 0, 00:10:09.603 "data_size": 0 00:10:09.603 } 00:10:09.603 ] 00:10:09.603 }' 00:10:09.603 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.603 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.173 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:10.173 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.173 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.173 [2024-11-26 12:53:27.627945] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:10.173 BaseBdev3 00:10:10.173 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.173 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:10.173 12:53:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:10.173 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:10.173 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:10.173 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:10.173 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:10.173 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:10.173 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.173 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.173 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.173 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:10.173 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.173 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.173 [ 00:10:10.173 { 00:10:10.173 "name": "BaseBdev3", 00:10:10.173 "aliases": [ 00:10:10.173 "e0e1eeaa-f298-40b3-9f9a-764bccf4672f" 00:10:10.173 ], 00:10:10.173 "product_name": "Malloc disk", 00:10:10.173 "block_size": 512, 00:10:10.173 "num_blocks": 65536, 00:10:10.173 "uuid": "e0e1eeaa-f298-40b3-9f9a-764bccf4672f", 00:10:10.173 "assigned_rate_limits": { 00:10:10.173 "rw_ios_per_sec": 0, 00:10:10.173 "rw_mbytes_per_sec": 0, 00:10:10.173 "r_mbytes_per_sec": 0, 00:10:10.173 "w_mbytes_per_sec": 0 00:10:10.173 }, 00:10:10.173 "claimed": true, 00:10:10.173 "claim_type": "exclusive_write", 00:10:10.173 "zoned": false, 00:10:10.173 "supported_io_types": { 
00:10:10.173 "read": true, 00:10:10.173 "write": true, 00:10:10.173 "unmap": true, 00:10:10.173 "flush": true, 00:10:10.173 "reset": true, 00:10:10.173 "nvme_admin": false, 00:10:10.173 "nvme_io": false, 00:10:10.173 "nvme_io_md": false, 00:10:10.173 "write_zeroes": true, 00:10:10.173 "zcopy": true, 00:10:10.173 "get_zone_info": false, 00:10:10.173 "zone_management": false, 00:10:10.173 "zone_append": false, 00:10:10.173 "compare": false, 00:10:10.173 "compare_and_write": false, 00:10:10.173 "abort": true, 00:10:10.173 "seek_hole": false, 00:10:10.173 "seek_data": false, 00:10:10.173 "copy": true, 00:10:10.173 "nvme_iov_md": false 00:10:10.173 }, 00:10:10.173 "memory_domains": [ 00:10:10.173 { 00:10:10.173 "dma_device_id": "system", 00:10:10.173 "dma_device_type": 1 00:10:10.173 }, 00:10:10.173 { 00:10:10.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.173 "dma_device_type": 2 00:10:10.173 } 00:10:10.173 ], 00:10:10.173 "driver_specific": {} 00:10:10.173 } 00:10:10.173 ] 00:10:10.173 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.173 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:10.173 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:10.173 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:10.174 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:10.174 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.174 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.174 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:10.174 12:53:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.174 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.174 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.174 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.174 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.174 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.174 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.174 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.174 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.174 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.174 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.174 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.174 "name": "Existed_Raid", 00:10:10.174 "uuid": "dc0c16ae-039a-4669-acfc-1db93848d8b6", 00:10:10.174 "strip_size_kb": 64, 00:10:10.174 "state": "configuring", 00:10:10.174 "raid_level": "concat", 00:10:10.174 "superblock": true, 00:10:10.174 "num_base_bdevs": 4, 00:10:10.174 "num_base_bdevs_discovered": 3, 00:10:10.174 "num_base_bdevs_operational": 4, 00:10:10.174 "base_bdevs_list": [ 00:10:10.174 { 00:10:10.174 "name": "BaseBdev1", 00:10:10.174 "uuid": "7d42f489-0f80-4066-96a6-a4d1e5dc2543", 00:10:10.174 "is_configured": true, 00:10:10.174 "data_offset": 2048, 00:10:10.174 "data_size": 63488 00:10:10.174 }, 00:10:10.174 { 00:10:10.174 "name": "BaseBdev2", 00:10:10.174 
"uuid": "6fa25f3e-3b9f-463e-8958-876f902dee60", 00:10:10.174 "is_configured": true, 00:10:10.174 "data_offset": 2048, 00:10:10.174 "data_size": 63488 00:10:10.174 }, 00:10:10.174 { 00:10:10.174 "name": "BaseBdev3", 00:10:10.174 "uuid": "e0e1eeaa-f298-40b3-9f9a-764bccf4672f", 00:10:10.174 "is_configured": true, 00:10:10.174 "data_offset": 2048, 00:10:10.174 "data_size": 63488 00:10:10.174 }, 00:10:10.174 { 00:10:10.174 "name": "BaseBdev4", 00:10:10.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.174 "is_configured": false, 00:10:10.174 "data_offset": 0, 00:10:10.174 "data_size": 0 00:10:10.174 } 00:10:10.174 ] 00:10:10.174 }' 00:10:10.174 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.174 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.432 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:10.432 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.432 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.691 [2024-11-26 12:53:28.118314] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:10.691 [2024-11-26 12:53:28.118620] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:10.691 [2024-11-26 12:53:28.118670] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:10.691 BaseBdev4 00:10:10.691 [2024-11-26 12:53:28.118962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:10.691 [2024-11-26 12:53:28.119087] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:10.691 [2024-11-26 12:53:28.119112] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:10:10.691 [2024-11-26 12:53:28.119255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.691 [ 00:10:10.691 { 00:10:10.691 "name": "BaseBdev4", 00:10:10.691 "aliases": [ 00:10:10.691 "1326d7c3-7729-413f-8270-4a6c3d857472" 00:10:10.691 ], 00:10:10.691 "product_name": "Malloc disk", 00:10:10.691 "block_size": 512, 00:10:10.691 
"num_blocks": 65536, 00:10:10.691 "uuid": "1326d7c3-7729-413f-8270-4a6c3d857472", 00:10:10.691 "assigned_rate_limits": { 00:10:10.691 "rw_ios_per_sec": 0, 00:10:10.691 "rw_mbytes_per_sec": 0, 00:10:10.691 "r_mbytes_per_sec": 0, 00:10:10.691 "w_mbytes_per_sec": 0 00:10:10.691 }, 00:10:10.691 "claimed": true, 00:10:10.691 "claim_type": "exclusive_write", 00:10:10.691 "zoned": false, 00:10:10.691 "supported_io_types": { 00:10:10.691 "read": true, 00:10:10.691 "write": true, 00:10:10.691 "unmap": true, 00:10:10.691 "flush": true, 00:10:10.691 "reset": true, 00:10:10.691 "nvme_admin": false, 00:10:10.691 "nvme_io": false, 00:10:10.691 "nvme_io_md": false, 00:10:10.691 "write_zeroes": true, 00:10:10.691 "zcopy": true, 00:10:10.691 "get_zone_info": false, 00:10:10.691 "zone_management": false, 00:10:10.691 "zone_append": false, 00:10:10.691 "compare": false, 00:10:10.691 "compare_and_write": false, 00:10:10.691 "abort": true, 00:10:10.691 "seek_hole": false, 00:10:10.691 "seek_data": false, 00:10:10.691 "copy": true, 00:10:10.691 "nvme_iov_md": false 00:10:10.691 }, 00:10:10.691 "memory_domains": [ 00:10:10.691 { 00:10:10.691 "dma_device_id": "system", 00:10:10.691 "dma_device_type": 1 00:10:10.691 }, 00:10:10.691 { 00:10:10.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.691 "dma_device_type": 2 00:10:10.691 } 00:10:10.691 ], 00:10:10.691 "driver_specific": {} 00:10:10.691 } 00:10:10.691 ] 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.691 "name": "Existed_Raid", 00:10:10.691 "uuid": "dc0c16ae-039a-4669-acfc-1db93848d8b6", 00:10:10.691 "strip_size_kb": 64, 00:10:10.691 "state": "online", 00:10:10.691 "raid_level": "concat", 00:10:10.691 "superblock": true, 00:10:10.691 "num_base_bdevs": 4, 
00:10:10.691 "num_base_bdevs_discovered": 4, 00:10:10.691 "num_base_bdevs_operational": 4, 00:10:10.691 "base_bdevs_list": [ 00:10:10.691 { 00:10:10.691 "name": "BaseBdev1", 00:10:10.691 "uuid": "7d42f489-0f80-4066-96a6-a4d1e5dc2543", 00:10:10.691 "is_configured": true, 00:10:10.691 "data_offset": 2048, 00:10:10.691 "data_size": 63488 00:10:10.691 }, 00:10:10.691 { 00:10:10.691 "name": "BaseBdev2", 00:10:10.691 "uuid": "6fa25f3e-3b9f-463e-8958-876f902dee60", 00:10:10.691 "is_configured": true, 00:10:10.691 "data_offset": 2048, 00:10:10.691 "data_size": 63488 00:10:10.691 }, 00:10:10.691 { 00:10:10.691 "name": "BaseBdev3", 00:10:10.691 "uuid": "e0e1eeaa-f298-40b3-9f9a-764bccf4672f", 00:10:10.691 "is_configured": true, 00:10:10.691 "data_offset": 2048, 00:10:10.691 "data_size": 63488 00:10:10.691 }, 00:10:10.691 { 00:10:10.691 "name": "BaseBdev4", 00:10:10.691 "uuid": "1326d7c3-7729-413f-8270-4a6c3d857472", 00:10:10.691 "is_configured": true, 00:10:10.691 "data_offset": 2048, 00:10:10.691 "data_size": 63488 00:10:10.691 } 00:10:10.691 ] 00:10:10.691 }' 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.691 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.950 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:10.950 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:10.950 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:10.950 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:10.950 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:10.950 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:10.950 
12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:10.950 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:10.950 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.950 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.950 [2024-11-26 12:53:28.621785] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.211 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.211 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:11.211 "name": "Existed_Raid", 00:10:11.211 "aliases": [ 00:10:11.211 "dc0c16ae-039a-4669-acfc-1db93848d8b6" 00:10:11.211 ], 00:10:11.211 "product_name": "Raid Volume", 00:10:11.211 "block_size": 512, 00:10:11.211 "num_blocks": 253952, 00:10:11.211 "uuid": "dc0c16ae-039a-4669-acfc-1db93848d8b6", 00:10:11.211 "assigned_rate_limits": { 00:10:11.211 "rw_ios_per_sec": 0, 00:10:11.211 "rw_mbytes_per_sec": 0, 00:10:11.211 "r_mbytes_per_sec": 0, 00:10:11.211 "w_mbytes_per_sec": 0 00:10:11.211 }, 00:10:11.211 "claimed": false, 00:10:11.211 "zoned": false, 00:10:11.211 "supported_io_types": { 00:10:11.211 "read": true, 00:10:11.211 "write": true, 00:10:11.211 "unmap": true, 00:10:11.211 "flush": true, 00:10:11.211 "reset": true, 00:10:11.211 "nvme_admin": false, 00:10:11.211 "nvme_io": false, 00:10:11.211 "nvme_io_md": false, 00:10:11.211 "write_zeroes": true, 00:10:11.211 "zcopy": false, 00:10:11.211 "get_zone_info": false, 00:10:11.211 "zone_management": false, 00:10:11.211 "zone_append": false, 00:10:11.211 "compare": false, 00:10:11.211 "compare_and_write": false, 00:10:11.211 "abort": false, 00:10:11.211 "seek_hole": false, 00:10:11.211 "seek_data": false, 00:10:11.211 "copy": false, 00:10:11.211 
"nvme_iov_md": false 00:10:11.211 }, 00:10:11.211 "memory_domains": [ 00:10:11.211 { 00:10:11.211 "dma_device_id": "system", 00:10:11.211 "dma_device_type": 1 00:10:11.211 }, 00:10:11.211 { 00:10:11.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.211 "dma_device_type": 2 00:10:11.211 }, 00:10:11.211 { 00:10:11.211 "dma_device_id": "system", 00:10:11.211 "dma_device_type": 1 00:10:11.211 }, 00:10:11.211 { 00:10:11.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.211 "dma_device_type": 2 00:10:11.211 }, 00:10:11.211 { 00:10:11.211 "dma_device_id": "system", 00:10:11.211 "dma_device_type": 1 00:10:11.211 }, 00:10:11.211 { 00:10:11.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.211 "dma_device_type": 2 00:10:11.211 }, 00:10:11.211 { 00:10:11.211 "dma_device_id": "system", 00:10:11.211 "dma_device_type": 1 00:10:11.211 }, 00:10:11.211 { 00:10:11.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.211 "dma_device_type": 2 00:10:11.211 } 00:10:11.211 ], 00:10:11.211 "driver_specific": { 00:10:11.211 "raid": { 00:10:11.211 "uuid": "dc0c16ae-039a-4669-acfc-1db93848d8b6", 00:10:11.211 "strip_size_kb": 64, 00:10:11.211 "state": "online", 00:10:11.211 "raid_level": "concat", 00:10:11.211 "superblock": true, 00:10:11.211 "num_base_bdevs": 4, 00:10:11.211 "num_base_bdevs_discovered": 4, 00:10:11.211 "num_base_bdevs_operational": 4, 00:10:11.211 "base_bdevs_list": [ 00:10:11.211 { 00:10:11.211 "name": "BaseBdev1", 00:10:11.211 "uuid": "7d42f489-0f80-4066-96a6-a4d1e5dc2543", 00:10:11.211 "is_configured": true, 00:10:11.211 "data_offset": 2048, 00:10:11.211 "data_size": 63488 00:10:11.211 }, 00:10:11.211 { 00:10:11.211 "name": "BaseBdev2", 00:10:11.211 "uuid": "6fa25f3e-3b9f-463e-8958-876f902dee60", 00:10:11.211 "is_configured": true, 00:10:11.211 "data_offset": 2048, 00:10:11.211 "data_size": 63488 00:10:11.211 }, 00:10:11.211 { 00:10:11.211 "name": "BaseBdev3", 00:10:11.211 "uuid": "e0e1eeaa-f298-40b3-9f9a-764bccf4672f", 00:10:11.211 "is_configured": true, 
00:10:11.211 "data_offset": 2048, 00:10:11.211 "data_size": 63488 00:10:11.211 }, 00:10:11.211 { 00:10:11.211 "name": "BaseBdev4", 00:10:11.211 "uuid": "1326d7c3-7729-413f-8270-4a6c3d857472", 00:10:11.211 "is_configured": true, 00:10:11.211 "data_offset": 2048, 00:10:11.211 "data_size": 63488 00:10:11.211 } 00:10:11.211 ] 00:10:11.211 } 00:10:11.211 } 00:10:11.211 }' 00:10:11.211 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:11.211 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:11.211 BaseBdev2 00:10:11.211 BaseBdev3 00:10:11.211 BaseBdev4' 00:10:11.211 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.212 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:11.212 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.212 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:11.212 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.212 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.212 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.212 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.212 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.212 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.212 12:53:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.212 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:11.212 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.212 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.212 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.212 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.212 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.212 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.212 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.212 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:11.212 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.212 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.212 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.212 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.212 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.212 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.212 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:11.212 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:11.212 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.212 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.212 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.472 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.472 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.472 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.472 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:11.472 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.472 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.472 [2024-11-26 12:53:28.917005] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:11.472 [2024-11-26 12:53:28.917042] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.472 [2024-11-26 12:53:28.917097] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.472 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.472 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:11.472 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:11.472 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:11.472 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:11.473 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:11.473 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:11.473 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.473 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:11.473 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:11.473 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.473 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.473 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.473 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.473 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.473 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.473 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.473 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.473 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.473 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.473 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:11.473 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.473 "name": "Existed_Raid", 00:10:11.473 "uuid": "dc0c16ae-039a-4669-acfc-1db93848d8b6", 00:10:11.473 "strip_size_kb": 64, 00:10:11.473 "state": "offline", 00:10:11.473 "raid_level": "concat", 00:10:11.473 "superblock": true, 00:10:11.473 "num_base_bdevs": 4, 00:10:11.473 "num_base_bdevs_discovered": 3, 00:10:11.473 "num_base_bdevs_operational": 3, 00:10:11.473 "base_bdevs_list": [ 00:10:11.473 { 00:10:11.473 "name": null, 00:10:11.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.473 "is_configured": false, 00:10:11.473 "data_offset": 0, 00:10:11.473 "data_size": 63488 00:10:11.473 }, 00:10:11.473 { 00:10:11.473 "name": "BaseBdev2", 00:10:11.473 "uuid": "6fa25f3e-3b9f-463e-8958-876f902dee60", 00:10:11.473 "is_configured": true, 00:10:11.473 "data_offset": 2048, 00:10:11.473 "data_size": 63488 00:10:11.473 }, 00:10:11.473 { 00:10:11.473 "name": "BaseBdev3", 00:10:11.473 "uuid": "e0e1eeaa-f298-40b3-9f9a-764bccf4672f", 00:10:11.473 "is_configured": true, 00:10:11.473 "data_offset": 2048, 00:10:11.473 "data_size": 63488 00:10:11.473 }, 00:10:11.473 { 00:10:11.473 "name": "BaseBdev4", 00:10:11.473 "uuid": "1326d7c3-7729-413f-8270-4a6c3d857472", 00:10:11.473 "is_configured": true, 00:10:11.473 "data_offset": 2048, 00:10:11.473 "data_size": 63488 00:10:11.473 } 00:10:11.473 ] 00:10:11.473 }' 00:10:11.473 12:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.473 12:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.732 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:11.732 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:11.732 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:11.732 12:53:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.732 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.732 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.732 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.732 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:11.732 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:11.732 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:11.732 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.732 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.732 [2024-11-26 12:53:29.351595] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:11.732 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.732 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:11.732 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:11.732 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.732 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:11.732 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.732 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.732 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.993 [2024-11-26 12:53:29.422644] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:11.993 12:53:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.993 [2024-11-26 12:53:29.473701] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:11.993 [2024-11-26 12:53:29.473810] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.993 BaseBdev2 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.993 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.993 [ 00:10:11.993 { 00:10:11.993 "name": "BaseBdev2", 00:10:11.993 "aliases": [ 00:10:11.993 
"0ef29bcd-b599-4d15-8100-0ce05fb9bdde" 00:10:11.993 ], 00:10:11.994 "product_name": "Malloc disk", 00:10:11.994 "block_size": 512, 00:10:11.994 "num_blocks": 65536, 00:10:11.994 "uuid": "0ef29bcd-b599-4d15-8100-0ce05fb9bdde", 00:10:11.994 "assigned_rate_limits": { 00:10:11.994 "rw_ios_per_sec": 0, 00:10:11.994 "rw_mbytes_per_sec": 0, 00:10:11.994 "r_mbytes_per_sec": 0, 00:10:11.994 "w_mbytes_per_sec": 0 00:10:11.994 }, 00:10:11.994 "claimed": false, 00:10:11.994 "zoned": false, 00:10:11.994 "supported_io_types": { 00:10:11.994 "read": true, 00:10:11.994 "write": true, 00:10:11.994 "unmap": true, 00:10:11.994 "flush": true, 00:10:11.994 "reset": true, 00:10:11.994 "nvme_admin": false, 00:10:11.994 "nvme_io": false, 00:10:11.994 "nvme_io_md": false, 00:10:11.994 "write_zeroes": true, 00:10:11.994 "zcopy": true, 00:10:11.994 "get_zone_info": false, 00:10:11.994 "zone_management": false, 00:10:11.994 "zone_append": false, 00:10:11.994 "compare": false, 00:10:11.994 "compare_and_write": false, 00:10:11.994 "abort": true, 00:10:11.994 "seek_hole": false, 00:10:11.994 "seek_data": false, 00:10:11.994 "copy": true, 00:10:11.994 "nvme_iov_md": false 00:10:11.994 }, 00:10:11.994 "memory_domains": [ 00:10:11.994 { 00:10:11.994 "dma_device_id": "system", 00:10:11.994 "dma_device_type": 1 00:10:11.994 }, 00:10:11.994 { 00:10:11.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.994 "dma_device_type": 2 00:10:11.994 } 00:10:11.994 ], 00:10:11.994 "driver_specific": {} 00:10:11.994 } 00:10:11.994 ] 00:10:11.994 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.994 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:11.994 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:11.994 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:11.994 12:53:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:11.994 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.994 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.994 BaseBdev3 00:10:11.994 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.994 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:11.994 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:11.994 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:11.994 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:11.994 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:11.994 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:11.994 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:11.994 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.994 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.994 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.994 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:11.994 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.994 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.994 [ 00:10:11.994 { 
00:10:11.994 "name": "BaseBdev3", 00:10:11.994 "aliases": [ 00:10:11.994 "ac8833ac-64ee-4cdd-a7a6-8becdb1c1d87" 00:10:11.994 ], 00:10:11.994 "product_name": "Malloc disk", 00:10:11.994 "block_size": 512, 00:10:11.994 "num_blocks": 65536, 00:10:11.994 "uuid": "ac8833ac-64ee-4cdd-a7a6-8becdb1c1d87", 00:10:11.994 "assigned_rate_limits": { 00:10:11.994 "rw_ios_per_sec": 0, 00:10:11.994 "rw_mbytes_per_sec": 0, 00:10:11.994 "r_mbytes_per_sec": 0, 00:10:11.994 "w_mbytes_per_sec": 0 00:10:11.994 }, 00:10:11.994 "claimed": false, 00:10:11.994 "zoned": false, 00:10:11.994 "supported_io_types": { 00:10:11.994 "read": true, 00:10:11.994 "write": true, 00:10:11.994 "unmap": true, 00:10:11.994 "flush": true, 00:10:11.994 "reset": true, 00:10:11.994 "nvme_admin": false, 00:10:11.994 "nvme_io": false, 00:10:11.994 "nvme_io_md": false, 00:10:11.994 "write_zeroes": true, 00:10:11.994 "zcopy": true, 00:10:11.994 "get_zone_info": false, 00:10:11.994 "zone_management": false, 00:10:11.994 "zone_append": false, 00:10:11.994 "compare": false, 00:10:11.994 "compare_and_write": false, 00:10:11.994 "abort": true, 00:10:11.994 "seek_hole": false, 00:10:11.994 "seek_data": false, 00:10:11.994 "copy": true, 00:10:11.994 "nvme_iov_md": false 00:10:11.994 }, 00:10:11.994 "memory_domains": [ 00:10:11.994 { 00:10:11.994 "dma_device_id": "system", 00:10:11.994 "dma_device_type": 1 00:10:11.994 }, 00:10:11.994 { 00:10:11.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.994 "dma_device_type": 2 00:10:11.994 } 00:10:11.994 ], 00:10:11.994 "driver_specific": {} 00:10:11.994 } 00:10:11.994 ] 00:10:11.994 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.994 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:11.994 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:11.994 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:11.994 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:11.994 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.995 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.995 BaseBdev4 00:10:11.995 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.995 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:11.995 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:11.995 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:11.995 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:11.995 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:11.995 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:11.995 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:11.995 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.995 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.256 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.256 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:12.256 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.256 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:12.256 [ 00:10:12.256 { 00:10:12.256 "name": "BaseBdev4", 00:10:12.256 "aliases": [ 00:10:12.256 "73d220b2-c717-4cdd-a6bd-04295afb0568" 00:10:12.256 ], 00:10:12.256 "product_name": "Malloc disk", 00:10:12.256 "block_size": 512, 00:10:12.256 "num_blocks": 65536, 00:10:12.256 "uuid": "73d220b2-c717-4cdd-a6bd-04295afb0568", 00:10:12.256 "assigned_rate_limits": { 00:10:12.256 "rw_ios_per_sec": 0, 00:10:12.256 "rw_mbytes_per_sec": 0, 00:10:12.256 "r_mbytes_per_sec": 0, 00:10:12.256 "w_mbytes_per_sec": 0 00:10:12.256 }, 00:10:12.256 "claimed": false, 00:10:12.256 "zoned": false, 00:10:12.256 "supported_io_types": { 00:10:12.256 "read": true, 00:10:12.256 "write": true, 00:10:12.256 "unmap": true, 00:10:12.256 "flush": true, 00:10:12.256 "reset": true, 00:10:12.256 "nvme_admin": false, 00:10:12.256 "nvme_io": false, 00:10:12.256 "nvme_io_md": false, 00:10:12.256 "write_zeroes": true, 00:10:12.256 "zcopy": true, 00:10:12.256 "get_zone_info": false, 00:10:12.256 "zone_management": false, 00:10:12.256 "zone_append": false, 00:10:12.256 "compare": false, 00:10:12.256 "compare_and_write": false, 00:10:12.256 "abort": true, 00:10:12.256 "seek_hole": false, 00:10:12.256 "seek_data": false, 00:10:12.256 "copy": true, 00:10:12.256 "nvme_iov_md": false 00:10:12.256 }, 00:10:12.256 "memory_domains": [ 00:10:12.256 { 00:10:12.256 "dma_device_id": "system", 00:10:12.256 "dma_device_type": 1 00:10:12.256 }, 00:10:12.256 { 00:10:12.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.256 "dma_device_type": 2 00:10:12.256 } 00:10:12.256 ], 00:10:12.256 "driver_specific": {} 00:10:12.256 } 00:10:12.256 ] 00:10:12.256 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.256 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:12.256 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:12.256 12:53:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:12.256 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:12.256 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.256 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.256 [2024-11-26 12:53:29.700508] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:12.256 [2024-11-26 12:53:29.700628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:12.256 [2024-11-26 12:53:29.700668] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:12.256 [2024-11-26 12:53:29.702436] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:12.256 [2024-11-26 12:53:29.702537] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:12.256 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.256 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:12.256 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.256 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.256 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.256 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.256 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:12.256 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.256 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.257 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.257 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.257 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.257 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.257 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.257 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.257 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.257 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.257 "name": "Existed_Raid", 00:10:12.257 "uuid": "b1c8d78a-0ffe-4d3f-af2d-9461ef3ddef6", 00:10:12.257 "strip_size_kb": 64, 00:10:12.257 "state": "configuring", 00:10:12.257 "raid_level": "concat", 00:10:12.257 "superblock": true, 00:10:12.257 "num_base_bdevs": 4, 00:10:12.257 "num_base_bdevs_discovered": 3, 00:10:12.257 "num_base_bdevs_operational": 4, 00:10:12.257 "base_bdevs_list": [ 00:10:12.257 { 00:10:12.257 "name": "BaseBdev1", 00:10:12.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.257 "is_configured": false, 00:10:12.257 "data_offset": 0, 00:10:12.257 "data_size": 0 00:10:12.257 }, 00:10:12.257 { 00:10:12.257 "name": "BaseBdev2", 00:10:12.257 "uuid": "0ef29bcd-b599-4d15-8100-0ce05fb9bdde", 00:10:12.257 "is_configured": true, 00:10:12.257 "data_offset": 2048, 00:10:12.257 "data_size": 63488 
00:10:12.257 }, 00:10:12.257 { 00:10:12.257 "name": "BaseBdev3", 00:10:12.257 "uuid": "ac8833ac-64ee-4cdd-a7a6-8becdb1c1d87", 00:10:12.257 "is_configured": true, 00:10:12.257 "data_offset": 2048, 00:10:12.257 "data_size": 63488 00:10:12.257 }, 00:10:12.257 { 00:10:12.257 "name": "BaseBdev4", 00:10:12.257 "uuid": "73d220b2-c717-4cdd-a6bd-04295afb0568", 00:10:12.257 "is_configured": true, 00:10:12.257 "data_offset": 2048, 00:10:12.257 "data_size": 63488 00:10:12.257 } 00:10:12.257 ] 00:10:12.257 }' 00:10:12.257 12:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.257 12:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.516 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:12.516 12:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.516 12:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.516 [2024-11-26 12:53:30.135737] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:12.516 12:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.516 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:12.516 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.516 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.516 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.516 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.516 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:12.516 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.516 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.516 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.516 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.516 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.516 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.516 12:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.517 12:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.517 12:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.776 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.776 "name": "Existed_Raid", 00:10:12.776 "uuid": "b1c8d78a-0ffe-4d3f-af2d-9461ef3ddef6", 00:10:12.776 "strip_size_kb": 64, 00:10:12.776 "state": "configuring", 00:10:12.776 "raid_level": "concat", 00:10:12.776 "superblock": true, 00:10:12.776 "num_base_bdevs": 4, 00:10:12.776 "num_base_bdevs_discovered": 2, 00:10:12.776 "num_base_bdevs_operational": 4, 00:10:12.776 "base_bdevs_list": [ 00:10:12.776 { 00:10:12.776 "name": "BaseBdev1", 00:10:12.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.776 "is_configured": false, 00:10:12.776 "data_offset": 0, 00:10:12.776 "data_size": 0 00:10:12.776 }, 00:10:12.776 { 00:10:12.776 "name": null, 00:10:12.776 "uuid": "0ef29bcd-b599-4d15-8100-0ce05fb9bdde", 00:10:12.776 "is_configured": false, 00:10:12.776 "data_offset": 0, 00:10:12.776 "data_size": 63488 
00:10:12.776 }, 00:10:12.776 { 00:10:12.776 "name": "BaseBdev3", 00:10:12.776 "uuid": "ac8833ac-64ee-4cdd-a7a6-8becdb1c1d87", 00:10:12.776 "is_configured": true, 00:10:12.776 "data_offset": 2048, 00:10:12.776 "data_size": 63488 00:10:12.776 }, 00:10:12.776 { 00:10:12.776 "name": "BaseBdev4", 00:10:12.776 "uuid": "73d220b2-c717-4cdd-a6bd-04295afb0568", 00:10:12.776 "is_configured": true, 00:10:12.776 "data_offset": 2048, 00:10:12.776 "data_size": 63488 00:10:12.776 } 00:10:12.776 ] 00:10:12.776 }' 00:10:12.776 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.776 12:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.035 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.035 12:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.035 12:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.035 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:13.035 12:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.035 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:13.035 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:13.035 12:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.035 12:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.035 [2024-11-26 12:53:30.629955] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.035 BaseBdev1 00:10:13.035 12:53:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.035 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:13.035 12:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:13.035 12:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:13.035 12:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:13.035 12:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:13.035 12:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:13.035 12:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:13.035 12:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.035 12:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.035 12:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.035 12:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:13.035 12:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.035 12:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.035 [ 00:10:13.035 { 00:10:13.035 "name": "BaseBdev1", 00:10:13.035 "aliases": [ 00:10:13.035 "37c279f9-cecf-493f-b6ec-35f6e2cf50f3" 00:10:13.035 ], 00:10:13.035 "product_name": "Malloc disk", 00:10:13.035 "block_size": 512, 00:10:13.035 "num_blocks": 65536, 00:10:13.035 "uuid": "37c279f9-cecf-493f-b6ec-35f6e2cf50f3", 00:10:13.035 "assigned_rate_limits": { 00:10:13.035 "rw_ios_per_sec": 0, 00:10:13.035 "rw_mbytes_per_sec": 0, 
00:10:13.035 "r_mbytes_per_sec": 0, 00:10:13.035 "w_mbytes_per_sec": 0 00:10:13.035 }, 00:10:13.035 "claimed": true, 00:10:13.035 "claim_type": "exclusive_write", 00:10:13.035 "zoned": false, 00:10:13.035 "supported_io_types": { 00:10:13.035 "read": true, 00:10:13.035 "write": true, 00:10:13.035 "unmap": true, 00:10:13.035 "flush": true, 00:10:13.035 "reset": true, 00:10:13.035 "nvme_admin": false, 00:10:13.035 "nvme_io": false, 00:10:13.035 "nvme_io_md": false, 00:10:13.035 "write_zeroes": true, 00:10:13.035 "zcopy": true, 00:10:13.035 "get_zone_info": false, 00:10:13.035 "zone_management": false, 00:10:13.035 "zone_append": false, 00:10:13.035 "compare": false, 00:10:13.035 "compare_and_write": false, 00:10:13.035 "abort": true, 00:10:13.035 "seek_hole": false, 00:10:13.035 "seek_data": false, 00:10:13.035 "copy": true, 00:10:13.035 "nvme_iov_md": false 00:10:13.035 }, 00:10:13.035 "memory_domains": [ 00:10:13.035 { 00:10:13.035 "dma_device_id": "system", 00:10:13.035 "dma_device_type": 1 00:10:13.035 }, 00:10:13.035 { 00:10:13.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.035 "dma_device_type": 2 00:10:13.035 } 00:10:13.035 ], 00:10:13.035 "driver_specific": {} 00:10:13.035 } 00:10:13.035 ] 00:10:13.035 12:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.035 12:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:13.036 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:13.036 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.036 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.036 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:13.036 12:53:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.036 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.036 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.036 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.036 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.036 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.036 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.036 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.036 12:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.036 12:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.036 12:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.295 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.295 "name": "Existed_Raid", 00:10:13.295 "uuid": "b1c8d78a-0ffe-4d3f-af2d-9461ef3ddef6", 00:10:13.295 "strip_size_kb": 64, 00:10:13.295 "state": "configuring", 00:10:13.295 "raid_level": "concat", 00:10:13.295 "superblock": true, 00:10:13.295 "num_base_bdevs": 4, 00:10:13.295 "num_base_bdevs_discovered": 3, 00:10:13.295 "num_base_bdevs_operational": 4, 00:10:13.295 "base_bdevs_list": [ 00:10:13.295 { 00:10:13.295 "name": "BaseBdev1", 00:10:13.295 "uuid": "37c279f9-cecf-493f-b6ec-35f6e2cf50f3", 00:10:13.295 "is_configured": true, 00:10:13.295 "data_offset": 2048, 00:10:13.295 "data_size": 63488 00:10:13.295 }, 00:10:13.295 { 
00:10:13.295 "name": null, 00:10:13.295 "uuid": "0ef29bcd-b599-4d15-8100-0ce05fb9bdde", 00:10:13.295 "is_configured": false, 00:10:13.295 "data_offset": 0, 00:10:13.295 "data_size": 63488 00:10:13.295 }, 00:10:13.295 { 00:10:13.295 "name": "BaseBdev3", 00:10:13.295 "uuid": "ac8833ac-64ee-4cdd-a7a6-8becdb1c1d87", 00:10:13.295 "is_configured": true, 00:10:13.295 "data_offset": 2048, 00:10:13.295 "data_size": 63488 00:10:13.295 }, 00:10:13.295 { 00:10:13.295 "name": "BaseBdev4", 00:10:13.295 "uuid": "73d220b2-c717-4cdd-a6bd-04295afb0568", 00:10:13.295 "is_configured": true, 00:10:13.295 "data_offset": 2048, 00:10:13.295 "data_size": 63488 00:10:13.295 } 00:10:13.295 ] 00:10:13.295 }' 00:10:13.295 12:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.295 12:53:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.555 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.555 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:13.555 12:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.555 12:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.555 12:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.555 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:13.555 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:13.555 12:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.555 12:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.555 [2024-11-26 12:53:31.165073] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:13.555 12:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.555 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:13.555 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.555 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.555 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:13.555 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.555 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.555 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.555 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.555 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.555 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.555 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.555 12:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.555 12:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.555 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.555 12:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.555 12:53:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.555 "name": "Existed_Raid", 00:10:13.555 "uuid": "b1c8d78a-0ffe-4d3f-af2d-9461ef3ddef6", 00:10:13.555 "strip_size_kb": 64, 00:10:13.555 "state": "configuring", 00:10:13.555 "raid_level": "concat", 00:10:13.555 "superblock": true, 00:10:13.555 "num_base_bdevs": 4, 00:10:13.555 "num_base_bdevs_discovered": 2, 00:10:13.555 "num_base_bdevs_operational": 4, 00:10:13.555 "base_bdevs_list": [ 00:10:13.555 { 00:10:13.555 "name": "BaseBdev1", 00:10:13.555 "uuid": "37c279f9-cecf-493f-b6ec-35f6e2cf50f3", 00:10:13.555 "is_configured": true, 00:10:13.555 "data_offset": 2048, 00:10:13.555 "data_size": 63488 00:10:13.555 }, 00:10:13.555 { 00:10:13.555 "name": null, 00:10:13.555 "uuid": "0ef29bcd-b599-4d15-8100-0ce05fb9bdde", 00:10:13.555 "is_configured": false, 00:10:13.555 "data_offset": 0, 00:10:13.555 "data_size": 63488 00:10:13.555 }, 00:10:13.555 { 00:10:13.555 "name": null, 00:10:13.555 "uuid": "ac8833ac-64ee-4cdd-a7a6-8becdb1c1d87", 00:10:13.555 "is_configured": false, 00:10:13.555 "data_offset": 0, 00:10:13.555 "data_size": 63488 00:10:13.555 }, 00:10:13.555 { 00:10:13.555 "name": "BaseBdev4", 00:10:13.555 "uuid": "73d220b2-c717-4cdd-a6bd-04295afb0568", 00:10:13.555 "is_configured": true, 00:10:13.555 "data_offset": 2048, 00:10:13.555 "data_size": 63488 00:10:13.555 } 00:10:13.555 ] 00:10:13.555 }' 00:10:13.555 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.555 12:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.124 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.125 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:14.125 12:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.125 
12:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.125 12:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.125 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:14.125 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:14.125 12:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.125 12:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.125 [2024-11-26 12:53:31.692234] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:14.125 12:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.125 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:14.125 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.125 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.125 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:14.125 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.125 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.125 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.125 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.125 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:14.125 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.125 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.125 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.125 12:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.125 12:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.125 12:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.125 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.125 "name": "Existed_Raid", 00:10:14.125 "uuid": "b1c8d78a-0ffe-4d3f-af2d-9461ef3ddef6", 00:10:14.125 "strip_size_kb": 64, 00:10:14.125 "state": "configuring", 00:10:14.125 "raid_level": "concat", 00:10:14.125 "superblock": true, 00:10:14.125 "num_base_bdevs": 4, 00:10:14.125 "num_base_bdevs_discovered": 3, 00:10:14.125 "num_base_bdevs_operational": 4, 00:10:14.125 "base_bdevs_list": [ 00:10:14.125 { 00:10:14.125 "name": "BaseBdev1", 00:10:14.125 "uuid": "37c279f9-cecf-493f-b6ec-35f6e2cf50f3", 00:10:14.125 "is_configured": true, 00:10:14.125 "data_offset": 2048, 00:10:14.125 "data_size": 63488 00:10:14.125 }, 00:10:14.125 { 00:10:14.125 "name": null, 00:10:14.125 "uuid": "0ef29bcd-b599-4d15-8100-0ce05fb9bdde", 00:10:14.125 "is_configured": false, 00:10:14.125 "data_offset": 0, 00:10:14.125 "data_size": 63488 00:10:14.125 }, 00:10:14.125 { 00:10:14.125 "name": "BaseBdev3", 00:10:14.125 "uuid": "ac8833ac-64ee-4cdd-a7a6-8becdb1c1d87", 00:10:14.125 "is_configured": true, 00:10:14.125 "data_offset": 2048, 00:10:14.125 "data_size": 63488 00:10:14.125 }, 00:10:14.125 { 00:10:14.125 "name": "BaseBdev4", 00:10:14.125 "uuid": 
"73d220b2-c717-4cdd-a6bd-04295afb0568", 00:10:14.125 "is_configured": true, 00:10:14.125 "data_offset": 2048, 00:10:14.125 "data_size": 63488 00:10:14.125 } 00:10:14.125 ] 00:10:14.125 }' 00:10:14.125 12:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.125 12:53:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.701 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.701 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:14.701 12:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.701 12:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.701 12:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.701 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:14.701 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:14.701 12:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.701 12:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.701 [2024-11-26 12:53:32.199384] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:14.701 12:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.701 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:14.701 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.701 12:53:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.701 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:14.701 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.701 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.701 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.701 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.701 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.701 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.701 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.701 12:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.701 12:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.701 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.701 12:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.701 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.701 "name": "Existed_Raid", 00:10:14.701 "uuid": "b1c8d78a-0ffe-4d3f-af2d-9461ef3ddef6", 00:10:14.701 "strip_size_kb": 64, 00:10:14.701 "state": "configuring", 00:10:14.701 "raid_level": "concat", 00:10:14.701 "superblock": true, 00:10:14.701 "num_base_bdevs": 4, 00:10:14.701 "num_base_bdevs_discovered": 2, 00:10:14.701 "num_base_bdevs_operational": 4, 00:10:14.701 "base_bdevs_list": [ 00:10:14.701 { 00:10:14.701 "name": null, 00:10:14.701 
"uuid": "37c279f9-cecf-493f-b6ec-35f6e2cf50f3", 00:10:14.701 "is_configured": false, 00:10:14.701 "data_offset": 0, 00:10:14.701 "data_size": 63488 00:10:14.701 }, 00:10:14.701 { 00:10:14.701 "name": null, 00:10:14.701 "uuid": "0ef29bcd-b599-4d15-8100-0ce05fb9bdde", 00:10:14.701 "is_configured": false, 00:10:14.701 "data_offset": 0, 00:10:14.702 "data_size": 63488 00:10:14.702 }, 00:10:14.702 { 00:10:14.702 "name": "BaseBdev3", 00:10:14.702 "uuid": "ac8833ac-64ee-4cdd-a7a6-8becdb1c1d87", 00:10:14.702 "is_configured": true, 00:10:14.702 "data_offset": 2048, 00:10:14.702 "data_size": 63488 00:10:14.702 }, 00:10:14.702 { 00:10:14.702 "name": "BaseBdev4", 00:10:14.702 "uuid": "73d220b2-c717-4cdd-a6bd-04295afb0568", 00:10:14.702 "is_configured": true, 00:10:14.702 "data_offset": 2048, 00:10:14.702 "data_size": 63488 00:10:14.702 } 00:10:14.702 ] 00:10:14.702 }' 00:10:14.702 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.702 12:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.961 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:14.961 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.961 12:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.961 12:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.221 12:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.221 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:15.221 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:15.221 12:53:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.221 12:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.221 [2024-11-26 12:53:32.669111] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:15.221 12:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.221 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:15.221 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.221 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.221 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:15.221 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.221 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.221 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.221 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.221 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.221 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.221 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.221 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.221 12:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.221 12:53:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.221 12:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.221 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.221 "name": "Existed_Raid", 00:10:15.221 "uuid": "b1c8d78a-0ffe-4d3f-af2d-9461ef3ddef6", 00:10:15.221 "strip_size_kb": 64, 00:10:15.221 "state": "configuring", 00:10:15.221 "raid_level": "concat", 00:10:15.221 "superblock": true, 00:10:15.221 "num_base_bdevs": 4, 00:10:15.221 "num_base_bdevs_discovered": 3, 00:10:15.221 "num_base_bdevs_operational": 4, 00:10:15.221 "base_bdevs_list": [ 00:10:15.221 { 00:10:15.221 "name": null, 00:10:15.221 "uuid": "37c279f9-cecf-493f-b6ec-35f6e2cf50f3", 00:10:15.221 "is_configured": false, 00:10:15.221 "data_offset": 0, 00:10:15.221 "data_size": 63488 00:10:15.221 }, 00:10:15.221 { 00:10:15.221 "name": "BaseBdev2", 00:10:15.221 "uuid": "0ef29bcd-b599-4d15-8100-0ce05fb9bdde", 00:10:15.221 "is_configured": true, 00:10:15.221 "data_offset": 2048, 00:10:15.221 "data_size": 63488 00:10:15.221 }, 00:10:15.221 { 00:10:15.221 "name": "BaseBdev3", 00:10:15.221 "uuid": "ac8833ac-64ee-4cdd-a7a6-8becdb1c1d87", 00:10:15.221 "is_configured": true, 00:10:15.221 "data_offset": 2048, 00:10:15.221 "data_size": 63488 00:10:15.221 }, 00:10:15.221 { 00:10:15.221 "name": "BaseBdev4", 00:10:15.221 "uuid": "73d220b2-c717-4cdd-a6bd-04295afb0568", 00:10:15.222 "is_configured": true, 00:10:15.222 "data_offset": 2048, 00:10:15.222 "data_size": 63488 00:10:15.222 } 00:10:15.222 ] 00:10:15.222 }' 00:10:15.222 12:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.222 12:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.481 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.481 12:53:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.481 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.481 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:15.481 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.740 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:15.740 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.740 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:15.740 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.740 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.740 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.740 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 37c279f9-cecf-493f-b6ec-35f6e2cf50f3 00:10:15.740 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.740 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.740 [2024-11-26 12:53:33.227070] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:15.740 NewBaseBdev 00:10:15.740 [2024-11-26 12:53:33.227349] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:15.740 [2024-11-26 12:53:33.227367] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:15.740 [2024-11-26 12:53:33.227635] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:15.741 [2024-11-26 12:53:33.227749] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:15.741 [2024-11-26 12:53:33.227762] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:15.741 [2024-11-26 12:53:33.227854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.741 
12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.741 [ 00:10:15.741 { 00:10:15.741 "name": "NewBaseBdev", 00:10:15.741 "aliases": [ 00:10:15.741 "37c279f9-cecf-493f-b6ec-35f6e2cf50f3" 00:10:15.741 ], 00:10:15.741 "product_name": "Malloc disk", 00:10:15.741 "block_size": 512, 00:10:15.741 "num_blocks": 65536, 00:10:15.741 "uuid": "37c279f9-cecf-493f-b6ec-35f6e2cf50f3", 00:10:15.741 "assigned_rate_limits": { 00:10:15.741 "rw_ios_per_sec": 0, 00:10:15.741 "rw_mbytes_per_sec": 0, 00:10:15.741 "r_mbytes_per_sec": 0, 00:10:15.741 "w_mbytes_per_sec": 0 00:10:15.741 }, 00:10:15.741 "claimed": true, 00:10:15.741 "claim_type": "exclusive_write", 00:10:15.741 "zoned": false, 00:10:15.741 "supported_io_types": { 00:10:15.741 "read": true, 00:10:15.741 "write": true, 00:10:15.741 "unmap": true, 00:10:15.741 "flush": true, 00:10:15.741 "reset": true, 00:10:15.741 "nvme_admin": false, 00:10:15.741 "nvme_io": false, 00:10:15.741 "nvme_io_md": false, 00:10:15.741 "write_zeroes": true, 00:10:15.741 "zcopy": true, 00:10:15.741 "get_zone_info": false, 00:10:15.741 "zone_management": false, 00:10:15.741 "zone_append": false, 00:10:15.741 "compare": false, 00:10:15.741 "compare_and_write": false, 00:10:15.741 "abort": true, 00:10:15.741 "seek_hole": false, 00:10:15.741 "seek_data": false, 00:10:15.741 "copy": true, 00:10:15.741 "nvme_iov_md": false 00:10:15.741 }, 00:10:15.741 "memory_domains": [ 00:10:15.741 { 00:10:15.741 "dma_device_id": "system", 00:10:15.741 "dma_device_type": 1 00:10:15.741 }, 00:10:15.741 { 00:10:15.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.741 "dma_device_type": 2 00:10:15.741 } 00:10:15.741 ], 00:10:15.741 "driver_specific": {} 00:10:15.741 } 00:10:15.741 ] 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:15.741 12:53:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.741 "name": "Existed_Raid", 00:10:15.741 "uuid": "b1c8d78a-0ffe-4d3f-af2d-9461ef3ddef6", 00:10:15.741 "strip_size_kb": 64, 00:10:15.741 
"state": "online", 00:10:15.741 "raid_level": "concat", 00:10:15.741 "superblock": true, 00:10:15.741 "num_base_bdevs": 4, 00:10:15.741 "num_base_bdevs_discovered": 4, 00:10:15.741 "num_base_bdevs_operational": 4, 00:10:15.741 "base_bdevs_list": [ 00:10:15.741 { 00:10:15.741 "name": "NewBaseBdev", 00:10:15.741 "uuid": "37c279f9-cecf-493f-b6ec-35f6e2cf50f3", 00:10:15.741 "is_configured": true, 00:10:15.741 "data_offset": 2048, 00:10:15.741 "data_size": 63488 00:10:15.741 }, 00:10:15.741 { 00:10:15.741 "name": "BaseBdev2", 00:10:15.741 "uuid": "0ef29bcd-b599-4d15-8100-0ce05fb9bdde", 00:10:15.741 "is_configured": true, 00:10:15.741 "data_offset": 2048, 00:10:15.741 "data_size": 63488 00:10:15.741 }, 00:10:15.741 { 00:10:15.741 "name": "BaseBdev3", 00:10:15.741 "uuid": "ac8833ac-64ee-4cdd-a7a6-8becdb1c1d87", 00:10:15.741 "is_configured": true, 00:10:15.741 "data_offset": 2048, 00:10:15.741 "data_size": 63488 00:10:15.741 }, 00:10:15.741 { 00:10:15.741 "name": "BaseBdev4", 00:10:15.741 "uuid": "73d220b2-c717-4cdd-a6bd-04295afb0568", 00:10:15.741 "is_configured": true, 00:10:15.741 "data_offset": 2048, 00:10:15.741 "data_size": 63488 00:10:15.741 } 00:10:15.741 ] 00:10:15.741 }' 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.741 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.311 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:16.311 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:16.311 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:16.311 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:16.311 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:16.311 
12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:16.311 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:16.311 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.311 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.311 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:16.311 [2024-11-26 12:53:33.698589] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:16.311 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.311 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:16.311 "name": "Existed_Raid", 00:10:16.311 "aliases": [ 00:10:16.311 "b1c8d78a-0ffe-4d3f-af2d-9461ef3ddef6" 00:10:16.311 ], 00:10:16.311 "product_name": "Raid Volume", 00:10:16.311 "block_size": 512, 00:10:16.311 "num_blocks": 253952, 00:10:16.311 "uuid": "b1c8d78a-0ffe-4d3f-af2d-9461ef3ddef6", 00:10:16.311 "assigned_rate_limits": { 00:10:16.311 "rw_ios_per_sec": 0, 00:10:16.311 "rw_mbytes_per_sec": 0, 00:10:16.311 "r_mbytes_per_sec": 0, 00:10:16.311 "w_mbytes_per_sec": 0 00:10:16.311 }, 00:10:16.311 "claimed": false, 00:10:16.311 "zoned": false, 00:10:16.311 "supported_io_types": { 00:10:16.311 "read": true, 00:10:16.311 "write": true, 00:10:16.311 "unmap": true, 00:10:16.311 "flush": true, 00:10:16.311 "reset": true, 00:10:16.311 "nvme_admin": false, 00:10:16.311 "nvme_io": false, 00:10:16.311 "nvme_io_md": false, 00:10:16.311 "write_zeroes": true, 00:10:16.311 "zcopy": false, 00:10:16.311 "get_zone_info": false, 00:10:16.311 "zone_management": false, 00:10:16.311 "zone_append": false, 00:10:16.311 "compare": false, 00:10:16.311 "compare_and_write": false, 00:10:16.311 "abort": 
false, 00:10:16.311 "seek_hole": false, 00:10:16.311 "seek_data": false, 00:10:16.311 "copy": false, 00:10:16.311 "nvme_iov_md": false 00:10:16.311 }, 00:10:16.311 "memory_domains": [ 00:10:16.311 { 00:10:16.311 "dma_device_id": "system", 00:10:16.311 "dma_device_type": 1 00:10:16.311 }, 00:10:16.311 { 00:10:16.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.311 "dma_device_type": 2 00:10:16.311 }, 00:10:16.311 { 00:10:16.311 "dma_device_id": "system", 00:10:16.311 "dma_device_type": 1 00:10:16.311 }, 00:10:16.311 { 00:10:16.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.311 "dma_device_type": 2 00:10:16.311 }, 00:10:16.311 { 00:10:16.311 "dma_device_id": "system", 00:10:16.311 "dma_device_type": 1 00:10:16.311 }, 00:10:16.311 { 00:10:16.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.311 "dma_device_type": 2 00:10:16.311 }, 00:10:16.311 { 00:10:16.311 "dma_device_id": "system", 00:10:16.311 "dma_device_type": 1 00:10:16.311 }, 00:10:16.311 { 00:10:16.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.311 "dma_device_type": 2 00:10:16.311 } 00:10:16.311 ], 00:10:16.311 "driver_specific": { 00:10:16.311 "raid": { 00:10:16.311 "uuid": "b1c8d78a-0ffe-4d3f-af2d-9461ef3ddef6", 00:10:16.311 "strip_size_kb": 64, 00:10:16.311 "state": "online", 00:10:16.311 "raid_level": "concat", 00:10:16.311 "superblock": true, 00:10:16.311 "num_base_bdevs": 4, 00:10:16.311 "num_base_bdevs_discovered": 4, 00:10:16.311 "num_base_bdevs_operational": 4, 00:10:16.311 "base_bdevs_list": [ 00:10:16.311 { 00:10:16.311 "name": "NewBaseBdev", 00:10:16.311 "uuid": "37c279f9-cecf-493f-b6ec-35f6e2cf50f3", 00:10:16.311 "is_configured": true, 00:10:16.311 "data_offset": 2048, 00:10:16.311 "data_size": 63488 00:10:16.311 }, 00:10:16.311 { 00:10:16.311 "name": "BaseBdev2", 00:10:16.311 "uuid": "0ef29bcd-b599-4d15-8100-0ce05fb9bdde", 00:10:16.311 "is_configured": true, 00:10:16.311 "data_offset": 2048, 00:10:16.311 "data_size": 63488 00:10:16.311 }, 00:10:16.311 { 00:10:16.311 
"name": "BaseBdev3", 00:10:16.311 "uuid": "ac8833ac-64ee-4cdd-a7a6-8becdb1c1d87", 00:10:16.311 "is_configured": true, 00:10:16.311 "data_offset": 2048, 00:10:16.311 "data_size": 63488 00:10:16.311 }, 00:10:16.311 { 00:10:16.312 "name": "BaseBdev4", 00:10:16.312 "uuid": "73d220b2-c717-4cdd-a6bd-04295afb0568", 00:10:16.312 "is_configured": true, 00:10:16.312 "data_offset": 2048, 00:10:16.312 "data_size": 63488 00:10:16.312 } 00:10:16.312 ] 00:10:16.312 } 00:10:16.312 } 00:10:16.312 }' 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:16.312 BaseBdev2 00:10:16.312 BaseBdev3 00:10:16.312 BaseBdev4' 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.312 12:53:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.312 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.571 12:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.571 12:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.571 12:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.571 12:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:16.571 12:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.571 12:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.571 [2024-11-26 12:53:34.021721] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:16.571 [2024-11-26 12:53:34.021794] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:16.571 [2024-11-26 12:53:34.021887] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:16.571 [2024-11-26 12:53:34.021985] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:16.571 [2024-11-26 12:53:34.022017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, 
state offline 00:10:16.571 12:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.571 12:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83030 00:10:16.571 12:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 83030 ']' 00:10:16.571 12:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 83030 00:10:16.571 12:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:16.571 12:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:16.571 12:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83030 00:10:16.571 12:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:16.572 12:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:16.572 12:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83030' 00:10:16.572 killing process with pid 83030 00:10:16.572 12:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 83030 00:10:16.572 [2024-11-26 12:53:34.069098] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:16.572 12:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 83030 00:10:16.572 [2024-11-26 12:53:34.109574] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:16.831 12:53:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:16.831 00:10:16.831 real 0m9.501s 00:10:16.831 user 0m16.256s 00:10:16.831 sys 0m1.963s 00:10:16.831 ************************************ 00:10:16.831 END TEST raid_state_function_test_sb 00:10:16.831 
************************************ 00:10:16.831 12:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:16.831 12:53:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.831 12:53:34 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:10:16.831 12:53:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:16.831 12:53:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:16.831 12:53:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:16.831 ************************************ 00:10:16.831 START TEST raid_superblock_test 00:10:16.831 ************************************ 00:10:16.831 12:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:10:16.831 12:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:16.832 12:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:16.832 12:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:16.832 12:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:16.832 12:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:16.832 12:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:16.832 12:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:16.832 12:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:16.832 12:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:16.832 12:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:16.832 12:53:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:16.832 12:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:16.832 12:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:16.832 12:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:16.832 12:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:16.832 12:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:16.832 12:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83680 00:10:16.832 12:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:16.832 12:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83680 00:10:16.832 12:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 83680 ']' 00:10:16.832 12:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.832 12:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:16.832 12:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.832 12:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:16.832 12:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.092 [2024-11-26 12:53:34.522909] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:17.092 [2024-11-26 12:53:34.523108] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83680 ] 00:10:17.092 [2024-11-26 12:53:34.683914] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.092 [2024-11-26 12:53:34.728349] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.353 [2024-11-26 12:53:34.770830] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:17.353 [2024-11-26 12:53:34.770942] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:17.923 
12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.923 malloc1 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.923 [2024-11-26 12:53:35.377159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:17.923 [2024-11-26 12:53:35.377321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.923 [2024-11-26 12:53:35.377366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:17.923 [2024-11-26 12:53:35.377405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.923 [2024-11-26 12:53:35.379470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.923 [2024-11-26 12:53:35.379540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:17.923 pt1 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.923 malloc2 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.923 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.924 [2024-11-26 12:53:35.429418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:17.924 [2024-11-26 12:53:35.429528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.924 [2024-11-26 12:53:35.429566] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:17.924 [2024-11-26 12:53:35.429591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.924 [2024-11-26 12:53:35.434375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.924 [2024-11-26 12:53:35.434449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:17.924 
pt2 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.924 malloc3 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.924 [2024-11-26 12:53:35.460059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:17.924 [2024-11-26 12:53:35.460178] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.924 [2024-11-26 12:53:35.460226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:17.924 [2024-11-26 12:53:35.460267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.924 [2024-11-26 12:53:35.462310] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.924 [2024-11-26 12:53:35.462377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:17.924 pt3 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.924 malloc4 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.924 [2024-11-26 12:53:35.492453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:17.924 [2024-11-26 12:53:35.492555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.924 [2024-11-26 12:53:35.492597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:17.924 [2024-11-26 12:53:35.492644] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.924 [2024-11-26 12:53:35.494652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.924 [2024-11-26 12:53:35.494723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:17.924 pt4 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.924 [2024-11-26 12:53:35.504512] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:17.924 [2024-11-26 
12:53:35.506355] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:17.924 [2024-11-26 12:53:35.506405] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:17.924 [2024-11-26 12:53:35.506459] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:17.924 [2024-11-26 12:53:35.506597] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:17.924 [2024-11-26 12:53:35.506609] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:17.924 [2024-11-26 12:53:35.506825] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:17.924 [2024-11-26 12:53:35.506974] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:17.924 [2024-11-26 12:53:35.506984] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:17.924 [2024-11-26 12:53:35.507102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.924 "name": "raid_bdev1", 00:10:17.924 "uuid": "1904fab0-271a-4693-9807-36c1162dca94", 00:10:17.924 "strip_size_kb": 64, 00:10:17.924 "state": "online", 00:10:17.924 "raid_level": "concat", 00:10:17.924 "superblock": true, 00:10:17.924 "num_base_bdevs": 4, 00:10:17.924 "num_base_bdevs_discovered": 4, 00:10:17.924 "num_base_bdevs_operational": 4, 00:10:17.924 "base_bdevs_list": [ 00:10:17.924 { 00:10:17.924 "name": "pt1", 00:10:17.924 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:17.924 "is_configured": true, 00:10:17.924 "data_offset": 2048, 00:10:17.924 "data_size": 63488 00:10:17.924 }, 00:10:17.924 { 00:10:17.924 "name": "pt2", 00:10:17.924 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.924 "is_configured": true, 00:10:17.924 "data_offset": 2048, 00:10:17.924 "data_size": 63488 00:10:17.924 }, 00:10:17.924 { 00:10:17.924 "name": "pt3", 00:10:17.924 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:17.924 "is_configured": true, 00:10:17.924 "data_offset": 2048, 00:10:17.924 
"data_size": 63488 00:10:17.924 }, 00:10:17.924 { 00:10:17.924 "name": "pt4", 00:10:17.924 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:17.924 "is_configured": true, 00:10:17.924 "data_offset": 2048, 00:10:17.924 "data_size": 63488 00:10:17.924 } 00:10:17.924 ] 00:10:17.924 }' 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.924 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.494 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:18.494 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:18.494 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:18.494 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:18.494 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:18.494 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:18.494 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:18.494 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.494 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.494 12:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:18.494 [2024-11-26 12:53:35.963990] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:18.494 12:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.494 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:18.494 "name": "raid_bdev1", 00:10:18.494 "aliases": [ 00:10:18.494 "1904fab0-271a-4693-9807-36c1162dca94" 
00:10:18.494 ], 00:10:18.494 "product_name": "Raid Volume", 00:10:18.494 "block_size": 512, 00:10:18.494 "num_blocks": 253952, 00:10:18.494 "uuid": "1904fab0-271a-4693-9807-36c1162dca94", 00:10:18.494 "assigned_rate_limits": { 00:10:18.494 "rw_ios_per_sec": 0, 00:10:18.494 "rw_mbytes_per_sec": 0, 00:10:18.494 "r_mbytes_per_sec": 0, 00:10:18.494 "w_mbytes_per_sec": 0 00:10:18.494 }, 00:10:18.494 "claimed": false, 00:10:18.494 "zoned": false, 00:10:18.494 "supported_io_types": { 00:10:18.494 "read": true, 00:10:18.494 "write": true, 00:10:18.494 "unmap": true, 00:10:18.494 "flush": true, 00:10:18.494 "reset": true, 00:10:18.494 "nvme_admin": false, 00:10:18.494 "nvme_io": false, 00:10:18.494 "nvme_io_md": false, 00:10:18.494 "write_zeroes": true, 00:10:18.494 "zcopy": false, 00:10:18.494 "get_zone_info": false, 00:10:18.494 "zone_management": false, 00:10:18.494 "zone_append": false, 00:10:18.494 "compare": false, 00:10:18.494 "compare_and_write": false, 00:10:18.494 "abort": false, 00:10:18.495 "seek_hole": false, 00:10:18.495 "seek_data": false, 00:10:18.495 "copy": false, 00:10:18.495 "nvme_iov_md": false 00:10:18.495 }, 00:10:18.495 "memory_domains": [ 00:10:18.495 { 00:10:18.495 "dma_device_id": "system", 00:10:18.495 "dma_device_type": 1 00:10:18.495 }, 00:10:18.495 { 00:10:18.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.495 "dma_device_type": 2 00:10:18.495 }, 00:10:18.495 { 00:10:18.495 "dma_device_id": "system", 00:10:18.495 "dma_device_type": 1 00:10:18.495 }, 00:10:18.495 { 00:10:18.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.495 "dma_device_type": 2 00:10:18.495 }, 00:10:18.495 { 00:10:18.495 "dma_device_id": "system", 00:10:18.495 "dma_device_type": 1 00:10:18.495 }, 00:10:18.495 { 00:10:18.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.495 "dma_device_type": 2 00:10:18.495 }, 00:10:18.495 { 00:10:18.495 "dma_device_id": "system", 00:10:18.495 "dma_device_type": 1 00:10:18.495 }, 00:10:18.495 { 00:10:18.495 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:18.495 "dma_device_type": 2 00:10:18.495 } 00:10:18.495 ], 00:10:18.495 "driver_specific": { 00:10:18.495 "raid": { 00:10:18.495 "uuid": "1904fab0-271a-4693-9807-36c1162dca94", 00:10:18.495 "strip_size_kb": 64, 00:10:18.495 "state": "online", 00:10:18.495 "raid_level": "concat", 00:10:18.495 "superblock": true, 00:10:18.495 "num_base_bdevs": 4, 00:10:18.495 "num_base_bdevs_discovered": 4, 00:10:18.495 "num_base_bdevs_operational": 4, 00:10:18.495 "base_bdevs_list": [ 00:10:18.495 { 00:10:18.495 "name": "pt1", 00:10:18.495 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:18.495 "is_configured": true, 00:10:18.495 "data_offset": 2048, 00:10:18.495 "data_size": 63488 00:10:18.495 }, 00:10:18.495 { 00:10:18.495 "name": "pt2", 00:10:18.495 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.495 "is_configured": true, 00:10:18.495 "data_offset": 2048, 00:10:18.495 "data_size": 63488 00:10:18.495 }, 00:10:18.495 { 00:10:18.495 "name": "pt3", 00:10:18.495 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:18.495 "is_configured": true, 00:10:18.495 "data_offset": 2048, 00:10:18.495 "data_size": 63488 00:10:18.495 }, 00:10:18.495 { 00:10:18.495 "name": "pt4", 00:10:18.495 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:18.495 "is_configured": true, 00:10:18.495 "data_offset": 2048, 00:10:18.495 "data_size": 63488 00:10:18.495 } 00:10:18.495 ] 00:10:18.495 } 00:10:18.495 } 00:10:18.495 }' 00:10:18.495 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:18.495 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:18.495 pt2 00:10:18.495 pt3 00:10:18.495 pt4' 00:10:18.495 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.495 12:53:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:18.495 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.495 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.495 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:18.495 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.495 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.495 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.495 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.495 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.495 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.495 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:18.495 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.495 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.495 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.495 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.755 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.755 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.755 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.755 12:53:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.755 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:18.755 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.755 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.755 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.755 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.755 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.755 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.755 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:18.755 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.755 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.755 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.755 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.755 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.755 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:18.756 [2024-11-26 12:53:36.303386] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1904fab0-271a-4693-9807-36c1162dca94 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1904fab0-271a-4693-9807-36c1162dca94 ']' 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.756 [2024-11-26 12:53:36.351011] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:18.756 [2024-11-26 12:53:36.351042] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:18.756 [2024-11-26 12:53:36.351129] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:18.756 [2024-11-26 12:53:36.351219] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:18.756 [2024-11-26 12:53:36.351237] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.756 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.016 [2024-11-26 12:53:36.498787] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:19.016 [2024-11-26 12:53:36.500734] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:19.016 [2024-11-26 12:53:36.500837] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:19.016 [2024-11-26 12:53:36.500883] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:19.016 [2024-11-26 12:53:36.500982] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:19.016 [2024-11-26 12:53:36.501062] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:19.016 [2024-11-26 12:53:36.501084] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:19.016 [2024-11-26 12:53:36.501100] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:19.016 [2024-11-26 12:53:36.501113] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:19.016 [2024-11-26 12:53:36.501122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state 
configuring 00:10:19.016 request: 00:10:19.016 { 00:10:19.016 "name": "raid_bdev1", 00:10:19.016 "raid_level": "concat", 00:10:19.016 "base_bdevs": [ 00:10:19.016 "malloc1", 00:10:19.016 "malloc2", 00:10:19.016 "malloc3", 00:10:19.016 "malloc4" 00:10:19.016 ], 00:10:19.016 "strip_size_kb": 64, 00:10:19.016 "superblock": false, 00:10:19.016 "method": "bdev_raid_create", 00:10:19.016 "req_id": 1 00:10:19.016 } 00:10:19.016 Got JSON-RPC error response 00:10:19.016 response: 00:10:19.016 { 00:10:19.016 "code": -17, 00:10:19.016 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:19.016 } 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.016 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.016 [2024-11-26 12:53:36.566630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:19.016 [2024-11-26 12:53:36.566674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.016 [2024-11-26 12:53:36.566710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:19.016 [2024-11-26 12:53:36.566718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.017 [2024-11-26 12:53:36.568843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.017 [2024-11-26 12:53:36.568877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:19.017 [2024-11-26 12:53:36.568938] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:19.017 [2024-11-26 12:53:36.568981] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:19.017 pt1 00:10:19.017 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.017 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:19.017 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.017 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.017 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:19.017 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.017 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:19.017 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.017 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.017 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.017 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.017 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.017 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.017 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.017 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.017 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.017 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.017 "name": "raid_bdev1", 00:10:19.017 "uuid": "1904fab0-271a-4693-9807-36c1162dca94", 00:10:19.017 "strip_size_kb": 64, 00:10:19.017 "state": "configuring", 00:10:19.017 "raid_level": "concat", 00:10:19.017 "superblock": true, 00:10:19.017 "num_base_bdevs": 4, 00:10:19.017 "num_base_bdevs_discovered": 1, 00:10:19.017 "num_base_bdevs_operational": 4, 00:10:19.017 "base_bdevs_list": [ 00:10:19.017 { 00:10:19.017 "name": "pt1", 00:10:19.017 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:19.017 "is_configured": true, 00:10:19.017 "data_offset": 2048, 00:10:19.017 "data_size": 63488 00:10:19.017 }, 00:10:19.017 { 00:10:19.017 "name": null, 00:10:19.017 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.017 "is_configured": false, 00:10:19.017 "data_offset": 2048, 00:10:19.017 "data_size": 63488 00:10:19.017 }, 00:10:19.017 { 00:10:19.017 "name": null, 00:10:19.017 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:19.017 "is_configured": false, 00:10:19.017 "data_offset": 2048, 00:10:19.017 "data_size": 63488 00:10:19.017 }, 00:10:19.017 { 00:10:19.017 "name": null, 00:10:19.017 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:19.017 "is_configured": false, 00:10:19.017 "data_offset": 2048, 00:10:19.017 "data_size": 63488 00:10:19.017 } 00:10:19.017 ] 00:10:19.017 }' 00:10:19.017 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.017 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.586 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:19.586 12:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:19.586 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.586 12:53:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.586 [2024-11-26 12:53:37.001918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:19.586 [2024-11-26 12:53:37.001971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.586 [2024-11-26 12:53:37.002007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:19.586 [2024-11-26 12:53:37.002015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.586 [2024-11-26 12:53:37.002392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.586 [2024-11-26 12:53:37.002408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:19.586 [2024-11-26 12:53:37.002473] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:19.586 [2024-11-26 12:53:37.002492] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:19.586 pt2 00:10:19.586 12:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.586 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:19.586 12:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.586 12:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.586 [2024-11-26 12:53:37.009912] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:19.586 12:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.586 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:19.586 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.586 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.586 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:19.586 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.586 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.586 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.586 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.586 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.586 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.586 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.586 12:53:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.586 12:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.586 12:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.586 12:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.586 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.586 "name": "raid_bdev1", 00:10:19.586 "uuid": "1904fab0-271a-4693-9807-36c1162dca94", 00:10:19.586 "strip_size_kb": 64, 00:10:19.586 "state": "configuring", 00:10:19.586 "raid_level": "concat", 00:10:19.586 "superblock": true, 00:10:19.586 "num_base_bdevs": 4, 00:10:19.586 "num_base_bdevs_discovered": 1, 00:10:19.586 "num_base_bdevs_operational": 4, 00:10:19.586 "base_bdevs_list": [ 00:10:19.586 { 00:10:19.586 "name": "pt1", 00:10:19.586 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:19.586 "is_configured": true, 00:10:19.586 "data_offset": 2048, 00:10:19.586 "data_size": 63488 00:10:19.586 }, 00:10:19.586 { 00:10:19.586 "name": null, 00:10:19.586 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.586 "is_configured": false, 00:10:19.586 "data_offset": 0, 00:10:19.586 "data_size": 63488 00:10:19.586 }, 00:10:19.586 { 00:10:19.586 "name": null, 00:10:19.586 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:19.586 "is_configured": false, 00:10:19.586 "data_offset": 2048, 00:10:19.586 "data_size": 63488 00:10:19.586 }, 00:10:19.586 { 00:10:19.586 "name": null, 00:10:19.586 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:19.586 "is_configured": false, 00:10:19.586 "data_offset": 2048, 00:10:19.586 "data_size": 63488 00:10:19.586 } 00:10:19.586 ] 00:10:19.586 }' 00:10:19.586 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.586 12:53:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:19.846 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:19.846 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:19.846 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:19.846 12:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.846 12:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.846 [2024-11-26 12:53:37.521045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:19.846 [2024-11-26 12:53:37.521106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.846 [2024-11-26 12:53:37.521122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:19.846 [2024-11-26 12:53:37.521132] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.846 [2024-11-26 12:53:37.521527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.846 [2024-11-26 12:53:37.521548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:19.846 [2024-11-26 12:53:37.521613] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:19.846 [2024-11-26 12:53:37.521635] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:20.107 pt2 00:10:20.107 12:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.107 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:20.107 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:20.107 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:20.107 12:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.107 12:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.107 [2024-11-26 12:53:37.528994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:20.107 [2024-11-26 12:53:37.529054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.107 [2024-11-26 12:53:37.529070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:20.107 [2024-11-26 12:53:37.529080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.107 [2024-11-26 12:53:37.529402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.107 [2024-11-26 12:53:37.529425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:20.107 [2024-11-26 12:53:37.529477] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:20.107 [2024-11-26 12:53:37.529496] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:20.107 pt3 00:10:20.107 12:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.107 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:20.107 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:20.107 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:20.107 12:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.107 12:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.107 [2024-11-26 12:53:37.537006] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:20.107 [2024-11-26 12:53:37.537067] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.107 [2024-11-26 12:53:37.537082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:20.107 [2024-11-26 12:53:37.537091] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.107 [2024-11-26 12:53:37.537383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.107 [2024-11-26 12:53:37.537413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:20.107 [2024-11-26 12:53:37.537460] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:20.107 [2024-11-26 12:53:37.537479] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:20.107 [2024-11-26 12:53:37.537568] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:20.107 [2024-11-26 12:53:37.537581] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:20.107 [2024-11-26 12:53:37.537805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:20.107 [2024-11-26 12:53:37.537912] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:20.107 [2024-11-26 12:53:37.537920] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:20.107 [2024-11-26 12:53:37.538008] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.107 pt4 00:10:20.107 12:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.108 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:20.108 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:20.108 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:20.108 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.108 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.108 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:20.108 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.108 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.108 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.108 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.108 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.108 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.108 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.108 12:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.108 12:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.108 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.108 12:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.108 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.108 "name": "raid_bdev1", 00:10:20.108 "uuid": "1904fab0-271a-4693-9807-36c1162dca94", 00:10:20.108 "strip_size_kb": 64, 00:10:20.108 "state": "online", 00:10:20.108 "raid_level": "concat", 00:10:20.108 
"superblock": true, 00:10:20.108 "num_base_bdevs": 4, 00:10:20.108 "num_base_bdevs_discovered": 4, 00:10:20.108 "num_base_bdevs_operational": 4, 00:10:20.108 "base_bdevs_list": [ 00:10:20.108 { 00:10:20.108 "name": "pt1", 00:10:20.108 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:20.108 "is_configured": true, 00:10:20.108 "data_offset": 2048, 00:10:20.108 "data_size": 63488 00:10:20.108 }, 00:10:20.108 { 00:10:20.108 "name": "pt2", 00:10:20.108 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.108 "is_configured": true, 00:10:20.108 "data_offset": 2048, 00:10:20.108 "data_size": 63488 00:10:20.108 }, 00:10:20.108 { 00:10:20.108 "name": "pt3", 00:10:20.108 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.108 "is_configured": true, 00:10:20.108 "data_offset": 2048, 00:10:20.108 "data_size": 63488 00:10:20.108 }, 00:10:20.108 { 00:10:20.108 "name": "pt4", 00:10:20.108 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:20.108 "is_configured": true, 00:10:20.108 "data_offset": 2048, 00:10:20.108 "data_size": 63488 00:10:20.108 } 00:10:20.108 ] 00:10:20.108 }' 00:10:20.108 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.108 12:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.368 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:20.368 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:20.368 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:20.368 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:20.368 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:20.368 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:20.368 12:53:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:20.368 12:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.368 12:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.368 12:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:20.368 [2024-11-26 12:53:37.984552] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.368 12:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.368 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:20.368 "name": "raid_bdev1", 00:10:20.368 "aliases": [ 00:10:20.368 "1904fab0-271a-4693-9807-36c1162dca94" 00:10:20.368 ], 00:10:20.368 "product_name": "Raid Volume", 00:10:20.368 "block_size": 512, 00:10:20.368 "num_blocks": 253952, 00:10:20.368 "uuid": "1904fab0-271a-4693-9807-36c1162dca94", 00:10:20.368 "assigned_rate_limits": { 00:10:20.368 "rw_ios_per_sec": 0, 00:10:20.368 "rw_mbytes_per_sec": 0, 00:10:20.368 "r_mbytes_per_sec": 0, 00:10:20.368 "w_mbytes_per_sec": 0 00:10:20.368 }, 00:10:20.368 "claimed": false, 00:10:20.368 "zoned": false, 00:10:20.368 "supported_io_types": { 00:10:20.368 "read": true, 00:10:20.368 "write": true, 00:10:20.368 "unmap": true, 00:10:20.368 "flush": true, 00:10:20.368 "reset": true, 00:10:20.368 "nvme_admin": false, 00:10:20.368 "nvme_io": false, 00:10:20.368 "nvme_io_md": false, 00:10:20.368 "write_zeroes": true, 00:10:20.368 "zcopy": false, 00:10:20.368 "get_zone_info": false, 00:10:20.368 "zone_management": false, 00:10:20.368 "zone_append": false, 00:10:20.368 "compare": false, 00:10:20.368 "compare_and_write": false, 00:10:20.368 "abort": false, 00:10:20.368 "seek_hole": false, 00:10:20.368 "seek_data": false, 00:10:20.368 "copy": false, 00:10:20.368 "nvme_iov_md": false 00:10:20.368 }, 00:10:20.368 
"memory_domains": [ 00:10:20.368 { 00:10:20.368 "dma_device_id": "system", 00:10:20.368 "dma_device_type": 1 00:10:20.368 }, 00:10:20.368 { 00:10:20.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.368 "dma_device_type": 2 00:10:20.368 }, 00:10:20.368 { 00:10:20.368 "dma_device_id": "system", 00:10:20.368 "dma_device_type": 1 00:10:20.368 }, 00:10:20.368 { 00:10:20.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.368 "dma_device_type": 2 00:10:20.368 }, 00:10:20.368 { 00:10:20.368 "dma_device_id": "system", 00:10:20.368 "dma_device_type": 1 00:10:20.368 }, 00:10:20.368 { 00:10:20.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.368 "dma_device_type": 2 00:10:20.368 }, 00:10:20.368 { 00:10:20.368 "dma_device_id": "system", 00:10:20.368 "dma_device_type": 1 00:10:20.368 }, 00:10:20.368 { 00:10:20.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.368 "dma_device_type": 2 00:10:20.368 } 00:10:20.368 ], 00:10:20.368 "driver_specific": { 00:10:20.368 "raid": { 00:10:20.368 "uuid": "1904fab0-271a-4693-9807-36c1162dca94", 00:10:20.369 "strip_size_kb": 64, 00:10:20.369 "state": "online", 00:10:20.369 "raid_level": "concat", 00:10:20.369 "superblock": true, 00:10:20.369 "num_base_bdevs": 4, 00:10:20.369 "num_base_bdevs_discovered": 4, 00:10:20.369 "num_base_bdevs_operational": 4, 00:10:20.369 "base_bdevs_list": [ 00:10:20.369 { 00:10:20.369 "name": "pt1", 00:10:20.369 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:20.369 "is_configured": true, 00:10:20.369 "data_offset": 2048, 00:10:20.369 "data_size": 63488 00:10:20.369 }, 00:10:20.369 { 00:10:20.369 "name": "pt2", 00:10:20.369 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.369 "is_configured": true, 00:10:20.369 "data_offset": 2048, 00:10:20.369 "data_size": 63488 00:10:20.369 }, 00:10:20.369 { 00:10:20.369 "name": "pt3", 00:10:20.369 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.369 "is_configured": true, 00:10:20.369 "data_offset": 2048, 00:10:20.369 "data_size": 63488 
00:10:20.369 }, 00:10:20.369 { 00:10:20.369 "name": "pt4", 00:10:20.369 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:20.369 "is_configured": true, 00:10:20.369 "data_offset": 2048, 00:10:20.369 "data_size": 63488 00:10:20.369 } 00:10:20.369 ] 00:10:20.369 } 00:10:20.369 } 00:10:20.369 }' 00:10:20.369 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:20.629 pt2 00:10:20.629 pt3 00:10:20.629 pt4' 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.629 12:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.889 [2024-11-26 12:53:38.307976] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.889 12:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.889 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1904fab0-271a-4693-9807-36c1162dca94 '!=' 1904fab0-271a-4693-9807-36c1162dca94 ']' 00:10:20.889 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:20.889 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:20.889 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:20.889 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83680 00:10:20.889 12:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 83680 ']' 00:10:20.889 12:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 83680 00:10:20.889 12:53:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:10:20.889 12:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:20.889 12:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83680 00:10:20.889 12:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:20.889 12:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:20.889 killing process with pid 83680 00:10:20.889 12:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83680' 00:10:20.889 12:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 83680 00:10:20.889 [2024-11-26 12:53:38.392420] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:20.889 [2024-11-26 12:53:38.392504] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.889 [2024-11-26 12:53:38.392569] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:20.889 [2024-11-26 12:53:38.392580] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:10:20.889 12:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 83680 00:10:20.889 [2024-11-26 12:53:38.435982] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:21.149 12:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:21.149 ************************************ 00:10:21.149 END TEST raid_superblock_test 00:10:21.149 ************************************ 00:10:21.149 00:10:21.149 real 0m4.246s 00:10:21.149 user 0m6.728s 00:10:21.149 sys 0m0.906s 00:10:21.149 12:53:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:21.149 12:53:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.149 12:53:38 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:10:21.149 12:53:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:21.149 12:53:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:21.149 12:53:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:21.149 ************************************ 00:10:21.149 START TEST raid_read_error_test 00:10:21.149 ************************************ 00:10:21.149 12:53:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:10:21.149 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:21.149 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:21.149 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:21.149 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:21.149 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.149 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.X63pYeb8hq 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83928 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83928 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 83928 ']' 00:10:21.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:21.150 12:53:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.410 [2024-11-26 12:53:38.868287] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:21.410 [2024-11-26 12:53:38.868415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83928 ] 00:10:21.410 [2024-11-26 12:53:39.030923] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.410 [2024-11-26 12:53:39.075283] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.672 [2024-11-26 12:53:39.117454] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.672 [2024-11-26 12:53:39.117581] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.244 BaseBdev1_malloc 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.244 true 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.244 [2024-11-26 12:53:39.715209] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:22.244 [2024-11-26 12:53:39.715354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.244 [2024-11-26 12:53:39.715396] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:22.244 [2024-11-26 12:53:39.715405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.244 [2024-11-26 12:53:39.717509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.244 [2024-11-26 12:53:39.717544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:22.244 BaseBdev1 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.244 BaseBdev2_malloc 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.244 true 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.244 [2024-11-26 12:53:39.770678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:22.244 [2024-11-26 12:53:39.770768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.244 [2024-11-26 12:53:39.770790] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:22.244 [2024-11-26 12:53:39.770801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.244 [2024-11-26 12:53:39.773443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.244 [2024-11-26 12:53:39.773482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:22.244 BaseBdev2 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.244 BaseBdev3_malloc 00:10:22.244 12:53:39 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.244 true 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.244 [2024-11-26 12:53:39.810986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:22.244 [2024-11-26 12:53:39.811031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.244 [2024-11-26 12:53:39.811064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:22.244 [2024-11-26 12:53:39.811072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.244 [2024-11-26 12:53:39.813056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.244 [2024-11-26 12:53:39.813092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:22.244 BaseBdev3 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.244 BaseBdev4_malloc 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.244 12:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:22.245 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.245 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.245 true 00:10:22.245 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.245 12:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:22.245 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.245 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.245 [2024-11-26 12:53:39.851399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:22.245 [2024-11-26 12:53:39.851445] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.245 [2024-11-26 12:53:39.851465] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:22.245 [2024-11-26 12:53:39.851473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.245 [2024-11-26 12:53:39.853419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.245 [2024-11-26 12:53:39.853453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:22.245 BaseBdev4 00:10:22.245 12:53:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.245 12:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:22.245 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.245 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.245 [2024-11-26 12:53:39.863432] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:22.245 [2024-11-26 12:53:39.865270] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:22.245 [2024-11-26 12:53:39.865407] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:22.245 [2024-11-26 12:53:39.865478] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:22.245 [2024-11-26 12:53:39.865672] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:22.245 [2024-11-26 12:53:39.865683] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:22.245 [2024-11-26 12:53:39.865924] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:22.245 [2024-11-26 12:53:39.866045] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:22.245 [2024-11-26 12:53:39.866056] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:22.245 [2024-11-26 12:53:39.866161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.245 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.245 12:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:22.245 12:53:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.245 12:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.245 12:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:22.245 12:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.245 12:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.245 12:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.245 12:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.245 12:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.245 12:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.245 12:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.245 12:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.245 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.245 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.245 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.504 12:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.504 "name": "raid_bdev1", 00:10:22.504 "uuid": "6f3df096-15e9-419f-b4e6-190961a5a035", 00:10:22.504 "strip_size_kb": 64, 00:10:22.504 "state": "online", 00:10:22.504 "raid_level": "concat", 00:10:22.504 "superblock": true, 00:10:22.504 "num_base_bdevs": 4, 00:10:22.504 "num_base_bdevs_discovered": 4, 00:10:22.504 "num_base_bdevs_operational": 4, 00:10:22.504 "base_bdevs_list": [ 
00:10:22.504 { 00:10:22.504 "name": "BaseBdev1", 00:10:22.504 "uuid": "093cce3b-e8b8-5aeb-8bd6-024efa95edf7", 00:10:22.504 "is_configured": true, 00:10:22.504 "data_offset": 2048, 00:10:22.504 "data_size": 63488 00:10:22.504 }, 00:10:22.504 { 00:10:22.504 "name": "BaseBdev2", 00:10:22.504 "uuid": "832b0dbb-688a-50aa-aa23-97c1de183866", 00:10:22.504 "is_configured": true, 00:10:22.504 "data_offset": 2048, 00:10:22.504 "data_size": 63488 00:10:22.504 }, 00:10:22.504 { 00:10:22.504 "name": "BaseBdev3", 00:10:22.504 "uuid": "f8c388b2-638f-512d-874c-815398f1df0e", 00:10:22.504 "is_configured": true, 00:10:22.504 "data_offset": 2048, 00:10:22.504 "data_size": 63488 00:10:22.504 }, 00:10:22.504 { 00:10:22.504 "name": "BaseBdev4", 00:10:22.504 "uuid": "7dab07a4-3bb0-51bb-b078-e019ccb6e7dd", 00:10:22.504 "is_configured": true, 00:10:22.504 "data_offset": 2048, 00:10:22.504 "data_size": 63488 00:10:22.504 } 00:10:22.504 ] 00:10:22.504 }' 00:10:22.504 12:53:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.504 12:53:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.764 12:53:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:22.764 12:53:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:22.764 [2024-11-26 12:53:40.362898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:23.703 12:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:23.703 12:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.703 12:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.703 12:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.703 12:53:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:23.703 12:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:23.703 12:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:23.703 12:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:23.703 12:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:23.703 12:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.703 12:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:23.703 12:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.703 12:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.703 12:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.703 12:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.703 12:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.703 12:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.703 12:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.703 12:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.703 12:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.703 12:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.703 12:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.703 12:53:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.703 "name": "raid_bdev1", 00:10:23.703 "uuid": "6f3df096-15e9-419f-b4e6-190961a5a035", 00:10:23.703 "strip_size_kb": 64, 00:10:23.703 "state": "online", 00:10:23.703 "raid_level": "concat", 00:10:23.704 "superblock": true, 00:10:23.704 "num_base_bdevs": 4, 00:10:23.704 "num_base_bdevs_discovered": 4, 00:10:23.704 "num_base_bdevs_operational": 4, 00:10:23.704 "base_bdevs_list": [ 00:10:23.704 { 00:10:23.704 "name": "BaseBdev1", 00:10:23.704 "uuid": "093cce3b-e8b8-5aeb-8bd6-024efa95edf7", 00:10:23.704 "is_configured": true, 00:10:23.704 "data_offset": 2048, 00:10:23.704 "data_size": 63488 00:10:23.704 }, 00:10:23.704 { 00:10:23.704 "name": "BaseBdev2", 00:10:23.704 "uuid": "832b0dbb-688a-50aa-aa23-97c1de183866", 00:10:23.704 "is_configured": true, 00:10:23.704 "data_offset": 2048, 00:10:23.704 "data_size": 63488 00:10:23.704 }, 00:10:23.704 { 00:10:23.704 "name": "BaseBdev3", 00:10:23.704 "uuid": "f8c388b2-638f-512d-874c-815398f1df0e", 00:10:23.704 "is_configured": true, 00:10:23.704 "data_offset": 2048, 00:10:23.704 "data_size": 63488 00:10:23.704 }, 00:10:23.704 { 00:10:23.704 "name": "BaseBdev4", 00:10:23.704 "uuid": "7dab07a4-3bb0-51bb-b078-e019ccb6e7dd", 00:10:23.704 "is_configured": true, 00:10:23.704 "data_offset": 2048, 00:10:23.704 "data_size": 63488 00:10:23.704 } 00:10:23.704 ] 00:10:23.704 }' 00:10:23.704 12:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.704 12:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.274 12:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:24.274 12:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.274 12:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.274 [2024-11-26 12:53:41.750889] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:24.274 [2024-11-26 12:53:41.751009] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:24.274 [2024-11-26 12:53:41.753458] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.274 [2024-11-26 12:53:41.753554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.274 [2024-11-26 12:53:41.753618] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:24.274 [2024-11-26 12:53:41.753688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:24.274 { 00:10:24.274 "results": [ 00:10:24.274 { 00:10:24.274 "job": "raid_bdev1", 00:10:24.274 "core_mask": "0x1", 00:10:24.274 "workload": "randrw", 00:10:24.274 "percentage": 50, 00:10:24.274 "status": "finished", 00:10:24.274 "queue_depth": 1, 00:10:24.274 "io_size": 131072, 00:10:24.274 "runtime": 1.388891, 00:10:24.274 "iops": 17472.213442235567, 00:10:24.274 "mibps": 2184.026680279446, 00:10:24.274 "io_failed": 1, 00:10:24.274 "io_timeout": 0, 00:10:24.274 "avg_latency_us": 79.4720382223828, 00:10:24.274 "min_latency_us": 24.370305676855896, 00:10:24.274 "max_latency_us": 1438.071615720524 00:10:24.274 } 00:10:24.274 ], 00:10:24.274 "core_count": 1 00:10:24.274 } 00:10:24.274 12:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.274 12:53:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83928 00:10:24.274 12:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 83928 ']' 00:10:24.274 12:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 83928 00:10:24.274 12:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:24.274 12:53:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:24.274 12:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83928 00:10:24.274 killing process with pid 83928 00:10:24.274 12:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:24.274 12:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:24.274 12:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83928' 00:10:24.274 12:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 83928 00:10:24.274 [2024-11-26 12:53:41.797940] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:24.274 12:53:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 83928 00:10:24.274 [2024-11-26 12:53:41.833638] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:24.535 12:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.X63pYeb8hq 00:10:24.535 12:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:24.535 12:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:24.535 12:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:24.535 12:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:24.535 12:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:24.535 12:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:24.535 12:53:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:24.535 00:10:24.535 real 0m3.320s 00:10:24.535 user 0m4.111s 00:10:24.535 sys 0m0.590s 00:10:24.535 ************************************ 00:10:24.535 END TEST raid_read_error_test 
00:10:24.535 ************************************ 00:10:24.535 12:53:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:24.535 12:53:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.535 12:53:42 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:10:24.535 12:53:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:24.535 12:53:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:24.535 12:53:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:24.535 ************************************ 00:10:24.535 START TEST raid_write_error_test 00:10:24.535 ************************************ 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.FhaReRX4wL 00:10:24.535 12:53:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=84062 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 84062 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 84062 ']' 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:24.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:24.535 12:53:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.795 [2024-11-26 12:53:42.266280] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:24.795 [2024-11-26 12:53:42.266427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84062 ] 00:10:24.795 [2024-11-26 12:53:42.433777] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.064 [2024-11-26 12:53:42.478877] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.064 [2024-11-26 12:53:42.521409] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.064 [2024-11-26 12:53:42.521449] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.652 BaseBdev1_malloc 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.652 true 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.652 [2024-11-26 12:53:43.119207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:25.652 [2024-11-26 12:53:43.119271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.652 [2024-11-26 12:53:43.119290] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:25.652 [2024-11-26 12:53:43.119298] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.652 [2024-11-26 12:53:43.121369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.652 [2024-11-26 12:53:43.121400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:25.652 BaseBdev1 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.652 BaseBdev2_malloc 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:25.652 12:53:43 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.652 true 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.652 [2024-11-26 12:53:43.176847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:25.652 [2024-11-26 12:53:43.176901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.652 [2024-11-26 12:53:43.176925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:25.652 [2024-11-26 12:53:43.176936] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.652 [2024-11-26 12:53:43.179624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.652 [2024-11-26 12:53:43.179662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:25.652 BaseBdev2 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:25.652 BaseBdev3_malloc 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.652 true 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.652 [2024-11-26 12:53:43.217495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:25.652 [2024-11-26 12:53:43.217539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.652 [2024-11-26 12:53:43.217557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:25.652 [2024-11-26 12:53:43.217566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.652 [2024-11-26 12:53:43.219787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.652 [2024-11-26 12:53:43.219825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:25.652 BaseBdev3 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.652 BaseBdev4_malloc 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.652 true 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.652 [2024-11-26 12:53:43.258292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:25.652 [2024-11-26 12:53:43.258333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.652 [2024-11-26 12:53:43.258353] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:25.652 [2024-11-26 12:53:43.258362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.652 [2024-11-26 12:53:43.260346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.652 [2024-11-26 12:53:43.260376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:25.652 BaseBdev4 
00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.652 [2024-11-26 12:53:43.270326] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.652 [2024-11-26 12:53:43.272164] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:25.652 [2024-11-26 12:53:43.272278] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:25.652 [2024-11-26 12:53:43.272331] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:25.652 [2024-11-26 12:53:43.272519] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:25.652 [2024-11-26 12:53:43.272538] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:25.652 [2024-11-26 12:53:43.272772] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:25.652 [2024-11-26 12:53:43.272912] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:25.652 [2024-11-26 12:53:43.272928] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:25.652 [2024-11-26 12:53:43.273055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.652 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.653 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.653 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.653 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.653 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.653 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.653 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.653 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.653 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.653 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.912 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.912 "name": "raid_bdev1", 00:10:25.912 "uuid": "210f9cd0-bbc4-49c1-bc09-660aefa1c491", 00:10:25.912 "strip_size_kb": 64, 00:10:25.912 "state": "online", 00:10:25.912 "raid_level": "concat", 00:10:25.912 "superblock": true, 00:10:25.912 "num_base_bdevs": 4, 00:10:25.912 "num_base_bdevs_discovered": 4, 00:10:25.912 
"num_base_bdevs_operational": 4, 00:10:25.912 "base_bdevs_list": [ 00:10:25.912 { 00:10:25.912 "name": "BaseBdev1", 00:10:25.912 "uuid": "d5092d25-5cf4-5dd6-bedc-20a9ac6d42b3", 00:10:25.912 "is_configured": true, 00:10:25.912 "data_offset": 2048, 00:10:25.912 "data_size": 63488 00:10:25.912 }, 00:10:25.912 { 00:10:25.912 "name": "BaseBdev2", 00:10:25.912 "uuid": "249a97d9-beac-5227-b92f-d3556da6a74f", 00:10:25.912 "is_configured": true, 00:10:25.912 "data_offset": 2048, 00:10:25.912 "data_size": 63488 00:10:25.912 }, 00:10:25.912 { 00:10:25.912 "name": "BaseBdev3", 00:10:25.912 "uuid": "1dd12c65-53af-5944-b489-ec9a58c3f9a6", 00:10:25.912 "is_configured": true, 00:10:25.912 "data_offset": 2048, 00:10:25.912 "data_size": 63488 00:10:25.912 }, 00:10:25.912 { 00:10:25.912 "name": "BaseBdev4", 00:10:25.912 "uuid": "a8415443-3b5c-5494-9222-78cc5ee8e446", 00:10:25.912 "is_configured": true, 00:10:25.912 "data_offset": 2048, 00:10:25.912 "data_size": 63488 00:10:25.912 } 00:10:25.912 ] 00:10:25.912 }' 00:10:25.912 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.912 12:53:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.172 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:26.172 12:53:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:26.172 [2024-11-26 12:53:43.785736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:27.111 12:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:27.111 12:53:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.112 12:53:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.112 12:53:44 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.112 12:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:27.112 12:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:27.112 12:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:27.112 12:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:27.112 12:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.112 12:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.112 12:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.112 12:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.112 12:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.112 12:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.112 12:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.112 12:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.112 12:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.112 12:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.112 12:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.112 12:53:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.112 12:53:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.112 12:53:44 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.112 12:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.112 "name": "raid_bdev1", 00:10:27.112 "uuid": "210f9cd0-bbc4-49c1-bc09-660aefa1c491", 00:10:27.112 "strip_size_kb": 64, 00:10:27.112 "state": "online", 00:10:27.112 "raid_level": "concat", 00:10:27.112 "superblock": true, 00:10:27.112 "num_base_bdevs": 4, 00:10:27.112 "num_base_bdevs_discovered": 4, 00:10:27.112 "num_base_bdevs_operational": 4, 00:10:27.112 "base_bdevs_list": [ 00:10:27.112 { 00:10:27.112 "name": "BaseBdev1", 00:10:27.112 "uuid": "d5092d25-5cf4-5dd6-bedc-20a9ac6d42b3", 00:10:27.112 "is_configured": true, 00:10:27.112 "data_offset": 2048, 00:10:27.112 "data_size": 63488 00:10:27.112 }, 00:10:27.112 { 00:10:27.112 "name": "BaseBdev2", 00:10:27.112 "uuid": "249a97d9-beac-5227-b92f-d3556da6a74f", 00:10:27.112 "is_configured": true, 00:10:27.112 "data_offset": 2048, 00:10:27.112 "data_size": 63488 00:10:27.112 }, 00:10:27.112 { 00:10:27.112 "name": "BaseBdev3", 00:10:27.112 "uuid": "1dd12c65-53af-5944-b489-ec9a58c3f9a6", 00:10:27.112 "is_configured": true, 00:10:27.112 "data_offset": 2048, 00:10:27.112 "data_size": 63488 00:10:27.112 }, 00:10:27.112 { 00:10:27.112 "name": "BaseBdev4", 00:10:27.112 "uuid": "a8415443-3b5c-5494-9222-78cc5ee8e446", 00:10:27.112 "is_configured": true, 00:10:27.112 "data_offset": 2048, 00:10:27.112 "data_size": 63488 00:10:27.112 } 00:10:27.112 ] 00:10:27.112 }' 00:10:27.112 12:53:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.112 12:53:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.682 12:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:27.682 12:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.682 12:53:45 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:27.682 [2024-11-26 12:53:45.181548] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:27.682 [2024-11-26 12:53:45.181586] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:27.682 [2024-11-26 12:53:45.184004] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:27.682 [2024-11-26 12:53:45.184067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.682 [2024-11-26 12:53:45.184111] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:27.682 [2024-11-26 12:53:45.184120] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:27.682 { 00:10:27.682 "results": [ 00:10:27.682 { 00:10:27.682 "job": "raid_bdev1", 00:10:27.682 "core_mask": "0x1", 00:10:27.682 "workload": "randrw", 00:10:27.682 "percentage": 50, 00:10:27.682 "status": "finished", 00:10:27.682 "queue_depth": 1, 00:10:27.682 "io_size": 131072, 00:10:27.682 "runtime": 1.396706, 00:10:27.682 "iops": 17280.658921777384, 00:10:27.682 "mibps": 2160.082365222173, 00:10:27.682 "io_failed": 1, 00:10:27.682 "io_timeout": 0, 00:10:27.682 "avg_latency_us": 80.29946956718861, 00:10:27.682 "min_latency_us": 24.258515283842794, 00:10:27.682 "max_latency_us": 1359.3711790393013 00:10:27.682 } 00:10:27.682 ], 00:10:27.682 "core_count": 1 00:10:27.682 } 00:10:27.682 12:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.682 12:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 84062 00:10:27.682 12:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 84062 ']' 00:10:27.682 12:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 84062 00:10:27.682 12:53:45 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:10:27.682 12:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:27.682 12:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84062 00:10:27.682 12:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:27.682 12:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:27.682 killing process with pid 84062 00:10:27.682 12:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84062' 00:10:27.682 12:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 84062 00:10:27.682 [2024-11-26 12:53:45.215964] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:27.682 12:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 84062 00:10:27.682 [2024-11-26 12:53:45.250275] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:27.943 12:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:27.943 12:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:27.943 12:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.FhaReRX4wL 00:10:27.943 12:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:27.943 12:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:27.943 12:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:27.943 12:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:27.943 12:53:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:27.943 00:10:27.943 real 0m3.339s 00:10:27.943 user 0m4.178s 
00:10:27.943 sys 0m0.569s 00:10:27.943 ************************************ 00:10:27.943 END TEST raid_write_error_test 00:10:27.943 ************************************ 00:10:27.943 12:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:27.943 12:53:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.943 12:53:45 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:27.943 12:53:45 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:10:27.943 12:53:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:27.943 12:53:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:27.943 12:53:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:27.943 ************************************ 00:10:27.943 START TEST raid_state_function_test 00:10:27.943 ************************************ 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:27.943 
12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:27.943 12:53:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=84195 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:27.943 Process raid pid: 84195 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84195' 00:10:27.943 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 84195 00:10:27.944 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 84195 ']' 00:10:27.944 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.944 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:27.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.944 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.944 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:27.944 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.204 [2024-11-26 12:53:45.657778] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:28.204 [2024-11-26 12:53:45.658290] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:28.204 [2024-11-26 12:53:45.819111] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.204 [2024-11-26 12:53:45.863957] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.463 [2024-11-26 12:53:45.906387] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.463 [2024-11-26 12:53:45.906425] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.031 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:29.031 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:29.031 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:29.031 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.031 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.031 [2024-11-26 12:53:46.507640] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:29.031 [2024-11-26 12:53:46.507846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:29.031 [2024-11-26 12:53:46.507872] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:29.031 [2024-11-26 12:53:46.507934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:29.031 [2024-11-26 12:53:46.507945] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:29.031 [2024-11-26 12:53:46.508021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:29.031 [2024-11-26 12:53:46.508034] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:29.031 [2024-11-26 12:53:46.508088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:29.031 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.031 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:29.031 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.031 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.031 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.031 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.031 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.031 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.031 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.031 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.031 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.031 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.031 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.031 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:10:29.031 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.031 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.031 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.031 "name": "Existed_Raid", 00:10:29.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.032 "strip_size_kb": 0, 00:10:29.032 "state": "configuring", 00:10:29.032 "raid_level": "raid1", 00:10:29.032 "superblock": false, 00:10:29.032 "num_base_bdevs": 4, 00:10:29.032 "num_base_bdevs_discovered": 0, 00:10:29.032 "num_base_bdevs_operational": 4, 00:10:29.032 "base_bdevs_list": [ 00:10:29.032 { 00:10:29.032 "name": "BaseBdev1", 00:10:29.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.032 "is_configured": false, 00:10:29.032 "data_offset": 0, 00:10:29.032 "data_size": 0 00:10:29.032 }, 00:10:29.032 { 00:10:29.032 "name": "BaseBdev2", 00:10:29.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.032 "is_configured": false, 00:10:29.032 "data_offset": 0, 00:10:29.032 "data_size": 0 00:10:29.032 }, 00:10:29.032 { 00:10:29.032 "name": "BaseBdev3", 00:10:29.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.032 "is_configured": false, 00:10:29.032 "data_offset": 0, 00:10:29.032 "data_size": 0 00:10:29.032 }, 00:10:29.032 { 00:10:29.032 "name": "BaseBdev4", 00:10:29.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.032 "is_configured": false, 00:10:29.032 "data_offset": 0, 00:10:29.032 "data_size": 0 00:10:29.032 } 00:10:29.032 ] 00:10:29.032 }' 00:10:29.032 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.032 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.292 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:10:29.292 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.292 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.292 [2024-11-26 12:53:46.930816] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:29.292 [2024-11-26 12:53:46.930856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:29.292 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.292 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:29.292 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.292 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.292 [2024-11-26 12:53:46.942832] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:29.292 [2024-11-26 12:53:46.943161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:29.292 [2024-11-26 12:53:46.943194] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:29.292 [2024-11-26 12:53:46.943273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:29.292 [2024-11-26 12:53:46.943282] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:29.292 [2024-11-26 12:53:46.943324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:29.292 [2024-11-26 12:53:46.943333] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:29.292 [2024-11-26 12:53:46.943390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:29.292 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.292 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:29.292 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.292 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.292 [2024-11-26 12:53:46.963568] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:29.292 BaseBdev1 00:10:29.292 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.292 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:29.292 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:29.292 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:29.292 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:29.292 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:29.292 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:29.292 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:29.292 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.292 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.552 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.552 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:10:29.552 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.552 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.552 [ 00:10:29.552 { 00:10:29.552 "name": "BaseBdev1", 00:10:29.552 "aliases": [ 00:10:29.552 "f0407c38-780f-4ef5-a151-d56a5fe7bc73" 00:10:29.552 ], 00:10:29.552 "product_name": "Malloc disk", 00:10:29.552 "block_size": 512, 00:10:29.552 "num_blocks": 65536, 00:10:29.552 "uuid": "f0407c38-780f-4ef5-a151-d56a5fe7bc73", 00:10:29.552 "assigned_rate_limits": { 00:10:29.552 "rw_ios_per_sec": 0, 00:10:29.552 "rw_mbytes_per_sec": 0, 00:10:29.552 "r_mbytes_per_sec": 0, 00:10:29.552 "w_mbytes_per_sec": 0 00:10:29.552 }, 00:10:29.552 "claimed": true, 00:10:29.552 "claim_type": "exclusive_write", 00:10:29.552 "zoned": false, 00:10:29.552 "supported_io_types": { 00:10:29.552 "read": true, 00:10:29.552 "write": true, 00:10:29.552 "unmap": true, 00:10:29.552 "flush": true, 00:10:29.552 "reset": true, 00:10:29.552 "nvme_admin": false, 00:10:29.552 "nvme_io": false, 00:10:29.552 "nvme_io_md": false, 00:10:29.552 "write_zeroes": true, 00:10:29.552 "zcopy": true, 00:10:29.552 "get_zone_info": false, 00:10:29.552 "zone_management": false, 00:10:29.552 "zone_append": false, 00:10:29.552 "compare": false, 00:10:29.552 "compare_and_write": false, 00:10:29.552 "abort": true, 00:10:29.552 "seek_hole": false, 00:10:29.552 "seek_data": false, 00:10:29.552 "copy": true, 00:10:29.552 "nvme_iov_md": false 00:10:29.552 }, 00:10:29.552 "memory_domains": [ 00:10:29.552 { 00:10:29.552 "dma_device_id": "system", 00:10:29.552 "dma_device_type": 1 00:10:29.552 }, 00:10:29.552 { 00:10:29.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.552 "dma_device_type": 2 00:10:29.552 } 00:10:29.552 ], 00:10:29.552 "driver_specific": {} 00:10:29.552 } 00:10:29.552 ] 00:10:29.552 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:29.552 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:29.552 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:29.552 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.552 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.552 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.552 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.552 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.552 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.552 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.552 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.552 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.552 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.552 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.552 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.552 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.552 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.552 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.552 "name": "Existed_Raid", 00:10:29.552 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:29.552 "strip_size_kb": 0, 00:10:29.552 "state": "configuring", 00:10:29.552 "raid_level": "raid1", 00:10:29.552 "superblock": false, 00:10:29.552 "num_base_bdevs": 4, 00:10:29.552 "num_base_bdevs_discovered": 1, 00:10:29.552 "num_base_bdevs_operational": 4, 00:10:29.552 "base_bdevs_list": [ 00:10:29.552 { 00:10:29.552 "name": "BaseBdev1", 00:10:29.553 "uuid": "f0407c38-780f-4ef5-a151-d56a5fe7bc73", 00:10:29.553 "is_configured": true, 00:10:29.553 "data_offset": 0, 00:10:29.553 "data_size": 65536 00:10:29.553 }, 00:10:29.553 { 00:10:29.553 "name": "BaseBdev2", 00:10:29.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.553 "is_configured": false, 00:10:29.553 "data_offset": 0, 00:10:29.553 "data_size": 0 00:10:29.553 }, 00:10:29.553 { 00:10:29.553 "name": "BaseBdev3", 00:10:29.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.553 "is_configured": false, 00:10:29.553 "data_offset": 0, 00:10:29.553 "data_size": 0 00:10:29.553 }, 00:10:29.553 { 00:10:29.553 "name": "BaseBdev4", 00:10:29.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.553 "is_configured": false, 00:10:29.553 "data_offset": 0, 00:10:29.553 "data_size": 0 00:10:29.553 } 00:10:29.553 ] 00:10:29.553 }' 00:10:29.553 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.553 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.813 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:29.813 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.813 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.813 [2024-11-26 12:53:47.438789] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:29.813 [2024-11-26 12:53:47.438929] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:29.813 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.813 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:29.813 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.813 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.813 [2024-11-26 12:53:47.450802] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:29.813 [2024-11-26 12:53:47.452656] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:29.813 [2024-11-26 12:53:47.452898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:29.813 [2024-11-26 12:53:47.452940] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:29.813 [2024-11-26 12:53:47.452965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:29.813 [2024-11-26 12:53:47.452994] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:29.813 [2024-11-26 12:53:47.453016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:29.813 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.813 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:29.813 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:29.813 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:29.813 12:53:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.813 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.813 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.813 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.813 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.813 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.813 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.813 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.813 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.813 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.813 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.813 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.813 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.813 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.072 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.072 "name": "Existed_Raid", 00:10:30.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.072 "strip_size_kb": 0, 00:10:30.072 "state": "configuring", 00:10:30.072 "raid_level": "raid1", 00:10:30.072 "superblock": false, 00:10:30.072 "num_base_bdevs": 4, 00:10:30.072 "num_base_bdevs_discovered": 1, 00:10:30.072 
"num_base_bdevs_operational": 4, 00:10:30.072 "base_bdevs_list": [ 00:10:30.072 { 00:10:30.072 "name": "BaseBdev1", 00:10:30.072 "uuid": "f0407c38-780f-4ef5-a151-d56a5fe7bc73", 00:10:30.072 "is_configured": true, 00:10:30.072 "data_offset": 0, 00:10:30.072 "data_size": 65536 00:10:30.072 }, 00:10:30.072 { 00:10:30.072 "name": "BaseBdev2", 00:10:30.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.072 "is_configured": false, 00:10:30.072 "data_offset": 0, 00:10:30.072 "data_size": 0 00:10:30.072 }, 00:10:30.072 { 00:10:30.072 "name": "BaseBdev3", 00:10:30.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.072 "is_configured": false, 00:10:30.072 "data_offset": 0, 00:10:30.072 "data_size": 0 00:10:30.072 }, 00:10:30.072 { 00:10:30.072 "name": "BaseBdev4", 00:10:30.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.072 "is_configured": false, 00:10:30.072 "data_offset": 0, 00:10:30.072 "data_size": 0 00:10:30.072 } 00:10:30.072 ] 00:10:30.072 }' 00:10:30.072 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.072 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.332 [2024-11-26 12:53:47.908518] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:30.332 BaseBdev2 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev2 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.332 [ 00:10:30.332 { 00:10:30.332 "name": "BaseBdev2", 00:10:30.332 "aliases": [ 00:10:30.332 "101cb6c3-d9e8-4804-98a5-758a765610c3" 00:10:30.332 ], 00:10:30.332 "product_name": "Malloc disk", 00:10:30.332 "block_size": 512, 00:10:30.332 "num_blocks": 65536, 00:10:30.332 "uuid": "101cb6c3-d9e8-4804-98a5-758a765610c3", 00:10:30.332 "assigned_rate_limits": { 00:10:30.332 "rw_ios_per_sec": 0, 00:10:30.332 "rw_mbytes_per_sec": 0, 00:10:30.332 "r_mbytes_per_sec": 0, 00:10:30.332 "w_mbytes_per_sec": 0 00:10:30.332 }, 00:10:30.332 "claimed": true, 00:10:30.332 "claim_type": "exclusive_write", 00:10:30.332 "zoned": false, 00:10:30.332 "supported_io_types": { 00:10:30.332 "read": true, 00:10:30.332 "write": true, 00:10:30.332 
"unmap": true, 00:10:30.332 "flush": true, 00:10:30.332 "reset": true, 00:10:30.332 "nvme_admin": false, 00:10:30.332 "nvme_io": false, 00:10:30.332 "nvme_io_md": false, 00:10:30.332 "write_zeroes": true, 00:10:30.332 "zcopy": true, 00:10:30.332 "get_zone_info": false, 00:10:30.332 "zone_management": false, 00:10:30.332 "zone_append": false, 00:10:30.332 "compare": false, 00:10:30.332 "compare_and_write": false, 00:10:30.332 "abort": true, 00:10:30.332 "seek_hole": false, 00:10:30.332 "seek_data": false, 00:10:30.332 "copy": true, 00:10:30.332 "nvme_iov_md": false 00:10:30.332 }, 00:10:30.332 "memory_domains": [ 00:10:30.332 { 00:10:30.332 "dma_device_id": "system", 00:10:30.332 "dma_device_type": 1 00:10:30.332 }, 00:10:30.332 { 00:10:30.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.332 "dma_device_type": 2 00:10:30.332 } 00:10:30.332 ], 00:10:30.332 "driver_specific": {} 00:10:30.332 } 00:10:30.332 ] 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.332 12:53:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.332 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.332 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.332 "name": "Existed_Raid", 00:10:30.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.332 "strip_size_kb": 0, 00:10:30.332 "state": "configuring", 00:10:30.332 "raid_level": "raid1", 00:10:30.332 "superblock": false, 00:10:30.332 "num_base_bdevs": 4, 00:10:30.332 "num_base_bdevs_discovered": 2, 00:10:30.332 "num_base_bdevs_operational": 4, 00:10:30.332 "base_bdevs_list": [ 00:10:30.332 { 00:10:30.332 "name": "BaseBdev1", 00:10:30.332 "uuid": "f0407c38-780f-4ef5-a151-d56a5fe7bc73", 00:10:30.332 "is_configured": true, 00:10:30.332 "data_offset": 0, 00:10:30.332 "data_size": 65536 00:10:30.332 }, 00:10:30.332 { 00:10:30.332 "name": "BaseBdev2", 00:10:30.332 "uuid": "101cb6c3-d9e8-4804-98a5-758a765610c3", 00:10:30.332 "is_configured": true, 00:10:30.332 
"data_offset": 0, 00:10:30.332 "data_size": 65536 00:10:30.332 }, 00:10:30.332 { 00:10:30.332 "name": "BaseBdev3", 00:10:30.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.332 "is_configured": false, 00:10:30.332 "data_offset": 0, 00:10:30.332 "data_size": 0 00:10:30.332 }, 00:10:30.332 { 00:10:30.332 "name": "BaseBdev4", 00:10:30.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.332 "is_configured": false, 00:10:30.332 "data_offset": 0, 00:10:30.332 "data_size": 0 00:10:30.332 } 00:10:30.332 ] 00:10:30.332 }' 00:10:30.332 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.332 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.903 [2024-11-26 12:53:48.334750] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:30.903 BaseBdev3 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.903 [ 00:10:30.903 { 00:10:30.903 "name": "BaseBdev3", 00:10:30.903 "aliases": [ 00:10:30.903 "28b9a219-1380-4856-900b-fb57741cb051" 00:10:30.903 ], 00:10:30.903 "product_name": "Malloc disk", 00:10:30.903 "block_size": 512, 00:10:30.903 "num_blocks": 65536, 00:10:30.903 "uuid": "28b9a219-1380-4856-900b-fb57741cb051", 00:10:30.903 "assigned_rate_limits": { 00:10:30.903 "rw_ios_per_sec": 0, 00:10:30.903 "rw_mbytes_per_sec": 0, 00:10:30.903 "r_mbytes_per_sec": 0, 00:10:30.903 "w_mbytes_per_sec": 0 00:10:30.903 }, 00:10:30.903 "claimed": true, 00:10:30.903 "claim_type": "exclusive_write", 00:10:30.903 "zoned": false, 00:10:30.903 "supported_io_types": { 00:10:30.903 "read": true, 00:10:30.903 "write": true, 00:10:30.903 "unmap": true, 00:10:30.903 "flush": true, 00:10:30.903 "reset": true, 00:10:30.903 "nvme_admin": false, 00:10:30.903 "nvme_io": false, 00:10:30.903 "nvme_io_md": false, 00:10:30.903 "write_zeroes": true, 00:10:30.903 "zcopy": true, 00:10:30.903 "get_zone_info": false, 00:10:30.903 "zone_management": false, 00:10:30.903 "zone_append": false, 00:10:30.903 "compare": false, 00:10:30.903 "compare_and_write": false, 00:10:30.903 "abort": true, 
00:10:30.903 "seek_hole": false, 00:10:30.903 "seek_data": false, 00:10:30.903 "copy": true, 00:10:30.903 "nvme_iov_md": false 00:10:30.903 }, 00:10:30.903 "memory_domains": [ 00:10:30.903 { 00:10:30.903 "dma_device_id": "system", 00:10:30.903 "dma_device_type": 1 00:10:30.903 }, 00:10:30.903 { 00:10:30.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.903 "dma_device_type": 2 00:10:30.903 } 00:10:30.903 ], 00:10:30.903 "driver_specific": {} 00:10:30.903 } 00:10:30.903 ] 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.903 12:53:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.903 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.903 "name": "Existed_Raid", 00:10:30.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.903 "strip_size_kb": 0, 00:10:30.903 "state": "configuring", 00:10:30.903 "raid_level": "raid1", 00:10:30.903 "superblock": false, 00:10:30.903 "num_base_bdevs": 4, 00:10:30.903 "num_base_bdevs_discovered": 3, 00:10:30.903 "num_base_bdevs_operational": 4, 00:10:30.903 "base_bdevs_list": [ 00:10:30.903 { 00:10:30.903 "name": "BaseBdev1", 00:10:30.903 "uuid": "f0407c38-780f-4ef5-a151-d56a5fe7bc73", 00:10:30.903 "is_configured": true, 00:10:30.903 "data_offset": 0, 00:10:30.903 "data_size": 65536 00:10:30.903 }, 00:10:30.903 { 00:10:30.903 "name": "BaseBdev2", 00:10:30.903 "uuid": "101cb6c3-d9e8-4804-98a5-758a765610c3", 00:10:30.903 "is_configured": true, 00:10:30.903 "data_offset": 0, 00:10:30.903 "data_size": 65536 00:10:30.903 }, 00:10:30.903 { 00:10:30.903 "name": "BaseBdev3", 00:10:30.903 "uuid": "28b9a219-1380-4856-900b-fb57741cb051", 00:10:30.903 "is_configured": true, 00:10:30.904 "data_offset": 0, 00:10:30.904 "data_size": 65536 00:10:30.904 }, 00:10:30.904 { 00:10:30.904 "name": "BaseBdev4", 00:10:30.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.904 "is_configured": false, 00:10:30.904 "data_offset": 
0, 00:10:30.904 "data_size": 0 00:10:30.904 } 00:10:30.904 ] 00:10:30.904 }' 00:10:30.904 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.904 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.164 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:31.164 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.164 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.164 [2024-11-26 12:53:48.828989] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:31.164 [2024-11-26 12:53:48.829141] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:31.164 [2024-11-26 12:53:48.829171] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:31.164 [2024-11-26 12:53:48.829535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:31.164 [2024-11-26 12:53:48.829728] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:31.164 [2024-11-26 12:53:48.829772] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:10:31.164 [2024-11-26 12:53:48.830018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.164 BaseBdev4 00:10:31.164 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.164 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:31.164 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:31.164 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:10:31.164 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:31.164 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:31.164 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:31.164 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:31.165 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.165 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.424 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.424 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:31.424 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.424 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.424 [ 00:10:31.424 { 00:10:31.424 "name": "BaseBdev4", 00:10:31.424 "aliases": [ 00:10:31.424 "01b55887-517f-4ead-b86a-f978af31522d" 00:10:31.424 ], 00:10:31.424 "product_name": "Malloc disk", 00:10:31.424 "block_size": 512, 00:10:31.424 "num_blocks": 65536, 00:10:31.424 "uuid": "01b55887-517f-4ead-b86a-f978af31522d", 00:10:31.424 "assigned_rate_limits": { 00:10:31.424 "rw_ios_per_sec": 0, 00:10:31.424 "rw_mbytes_per_sec": 0, 00:10:31.424 "r_mbytes_per_sec": 0, 00:10:31.424 "w_mbytes_per_sec": 0 00:10:31.424 }, 00:10:31.424 "claimed": true, 00:10:31.425 "claim_type": "exclusive_write", 00:10:31.425 "zoned": false, 00:10:31.425 "supported_io_types": { 00:10:31.425 "read": true, 00:10:31.425 "write": true, 00:10:31.425 "unmap": true, 00:10:31.425 "flush": true, 00:10:31.425 "reset": true, 00:10:31.425 "nvme_admin": false, 00:10:31.425 "nvme_io": 
false, 00:10:31.425 "nvme_io_md": false, 00:10:31.425 "write_zeroes": true, 00:10:31.425 "zcopy": true, 00:10:31.425 "get_zone_info": false, 00:10:31.425 "zone_management": false, 00:10:31.425 "zone_append": false, 00:10:31.425 "compare": false, 00:10:31.425 "compare_and_write": false, 00:10:31.425 "abort": true, 00:10:31.425 "seek_hole": false, 00:10:31.425 "seek_data": false, 00:10:31.425 "copy": true, 00:10:31.425 "nvme_iov_md": false 00:10:31.425 }, 00:10:31.425 "memory_domains": [ 00:10:31.425 { 00:10:31.425 "dma_device_id": "system", 00:10:31.425 "dma_device_type": 1 00:10:31.425 }, 00:10:31.425 { 00:10:31.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.425 "dma_device_type": 2 00:10:31.425 } 00:10:31.425 ], 00:10:31.425 "driver_specific": {} 00:10:31.425 } 00:10:31.425 ] 00:10:31.425 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.425 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:31.425 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:31.425 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:31.425 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:31.425 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.425 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.425 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.425 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.425 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.425 12:53:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.425 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.425 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.425 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.425 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.425 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.425 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.425 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.425 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.425 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.425 "name": "Existed_Raid", 00:10:31.425 "uuid": "7aae6cc2-f785-491a-8642-3b04d5cbb7ad", 00:10:31.425 "strip_size_kb": 0, 00:10:31.425 "state": "online", 00:10:31.425 "raid_level": "raid1", 00:10:31.425 "superblock": false, 00:10:31.425 "num_base_bdevs": 4, 00:10:31.425 "num_base_bdevs_discovered": 4, 00:10:31.425 "num_base_bdevs_operational": 4, 00:10:31.425 "base_bdevs_list": [ 00:10:31.425 { 00:10:31.425 "name": "BaseBdev1", 00:10:31.425 "uuid": "f0407c38-780f-4ef5-a151-d56a5fe7bc73", 00:10:31.425 "is_configured": true, 00:10:31.425 "data_offset": 0, 00:10:31.425 "data_size": 65536 00:10:31.425 }, 00:10:31.425 { 00:10:31.425 "name": "BaseBdev2", 00:10:31.425 "uuid": "101cb6c3-d9e8-4804-98a5-758a765610c3", 00:10:31.425 "is_configured": true, 00:10:31.425 "data_offset": 0, 00:10:31.425 "data_size": 65536 00:10:31.425 }, 00:10:31.425 { 00:10:31.425 "name": "BaseBdev3", 00:10:31.425 "uuid": "28b9a219-1380-4856-900b-fb57741cb051", 
00:10:31.425 "is_configured": true, 00:10:31.425 "data_offset": 0, 00:10:31.425 "data_size": 65536 00:10:31.425 }, 00:10:31.425 { 00:10:31.425 "name": "BaseBdev4", 00:10:31.425 "uuid": "01b55887-517f-4ead-b86a-f978af31522d", 00:10:31.425 "is_configured": true, 00:10:31.425 "data_offset": 0, 00:10:31.425 "data_size": 65536 00:10:31.425 } 00:10:31.425 ] 00:10:31.425 }' 00:10:31.425 12:53:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.425 12:53:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.685 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:31.685 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:31.685 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:31.685 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:31.685 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:31.685 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:31.685 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:31.685 12:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.685 12:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.685 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:31.685 [2024-11-26 12:53:49.304506] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:31.685 12:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.685 12:53:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:31.685 "name": "Existed_Raid", 00:10:31.685 "aliases": [ 00:10:31.685 "7aae6cc2-f785-491a-8642-3b04d5cbb7ad" 00:10:31.685 ], 00:10:31.685 "product_name": "Raid Volume", 00:10:31.685 "block_size": 512, 00:10:31.686 "num_blocks": 65536, 00:10:31.686 "uuid": "7aae6cc2-f785-491a-8642-3b04d5cbb7ad", 00:10:31.686 "assigned_rate_limits": { 00:10:31.686 "rw_ios_per_sec": 0, 00:10:31.686 "rw_mbytes_per_sec": 0, 00:10:31.686 "r_mbytes_per_sec": 0, 00:10:31.686 "w_mbytes_per_sec": 0 00:10:31.686 }, 00:10:31.686 "claimed": false, 00:10:31.686 "zoned": false, 00:10:31.686 "supported_io_types": { 00:10:31.686 "read": true, 00:10:31.686 "write": true, 00:10:31.686 "unmap": false, 00:10:31.686 "flush": false, 00:10:31.686 "reset": true, 00:10:31.686 "nvme_admin": false, 00:10:31.686 "nvme_io": false, 00:10:31.686 "nvme_io_md": false, 00:10:31.686 "write_zeroes": true, 00:10:31.686 "zcopy": false, 00:10:31.686 "get_zone_info": false, 00:10:31.686 "zone_management": false, 00:10:31.686 "zone_append": false, 00:10:31.686 "compare": false, 00:10:31.686 "compare_and_write": false, 00:10:31.686 "abort": false, 00:10:31.686 "seek_hole": false, 00:10:31.686 "seek_data": false, 00:10:31.686 "copy": false, 00:10:31.686 "nvme_iov_md": false 00:10:31.686 }, 00:10:31.686 "memory_domains": [ 00:10:31.686 { 00:10:31.686 "dma_device_id": "system", 00:10:31.686 "dma_device_type": 1 00:10:31.686 }, 00:10:31.686 { 00:10:31.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.686 "dma_device_type": 2 00:10:31.686 }, 00:10:31.686 { 00:10:31.686 "dma_device_id": "system", 00:10:31.686 "dma_device_type": 1 00:10:31.686 }, 00:10:31.686 { 00:10:31.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.686 "dma_device_type": 2 00:10:31.686 }, 00:10:31.686 { 00:10:31.686 "dma_device_id": "system", 00:10:31.686 "dma_device_type": 1 00:10:31.686 }, 00:10:31.686 { 00:10:31.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.686 "dma_device_type": 2 
00:10:31.686 }, 00:10:31.686 { 00:10:31.686 "dma_device_id": "system", 00:10:31.686 "dma_device_type": 1 00:10:31.686 }, 00:10:31.686 { 00:10:31.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.686 "dma_device_type": 2 00:10:31.686 } 00:10:31.686 ], 00:10:31.686 "driver_specific": { 00:10:31.686 "raid": { 00:10:31.686 "uuid": "7aae6cc2-f785-491a-8642-3b04d5cbb7ad", 00:10:31.686 "strip_size_kb": 0, 00:10:31.686 "state": "online", 00:10:31.686 "raid_level": "raid1", 00:10:31.686 "superblock": false, 00:10:31.686 "num_base_bdevs": 4, 00:10:31.686 "num_base_bdevs_discovered": 4, 00:10:31.686 "num_base_bdevs_operational": 4, 00:10:31.686 "base_bdevs_list": [ 00:10:31.686 { 00:10:31.686 "name": "BaseBdev1", 00:10:31.686 "uuid": "f0407c38-780f-4ef5-a151-d56a5fe7bc73", 00:10:31.686 "is_configured": true, 00:10:31.686 "data_offset": 0, 00:10:31.686 "data_size": 65536 00:10:31.686 }, 00:10:31.686 { 00:10:31.686 "name": "BaseBdev2", 00:10:31.686 "uuid": "101cb6c3-d9e8-4804-98a5-758a765610c3", 00:10:31.686 "is_configured": true, 00:10:31.686 "data_offset": 0, 00:10:31.686 "data_size": 65536 00:10:31.686 }, 00:10:31.686 { 00:10:31.686 "name": "BaseBdev3", 00:10:31.686 "uuid": "28b9a219-1380-4856-900b-fb57741cb051", 00:10:31.686 "is_configured": true, 00:10:31.686 "data_offset": 0, 00:10:31.686 "data_size": 65536 00:10:31.686 }, 00:10:31.686 { 00:10:31.686 "name": "BaseBdev4", 00:10:31.686 "uuid": "01b55887-517f-4ead-b86a-f978af31522d", 00:10:31.686 "is_configured": true, 00:10:31.686 "data_offset": 0, 00:10:31.686 "data_size": 65536 00:10:31.686 } 00:10:31.686 ] 00:10:31.686 } 00:10:31.686 } 00:10:31.686 }' 00:10:31.686 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:31.947 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:31.947 BaseBdev2 00:10:31.947 BaseBdev3 00:10:31.947 BaseBdev4' 00:10:31.947 
12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.947 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:31.947 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.947 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:31.947 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.947 12:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.947 12:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.948 12:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.948 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.948 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:31.948 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.948 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:31.948 12:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.948 12:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.948 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.948 12:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.948 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:10:31.948 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:31.948 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.948 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:31.948 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.948 12:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.948 12:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.948 12:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.948 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.948 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:31.948 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.948 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:31.948 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.948 12:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.948 12:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.948 12:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.948 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.948 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:10:31.948 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:31.948 12:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.948 12:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.948 [2024-11-26 12:53:49.623689] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:32.208 12:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.208 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:32.208 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:32.208 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:32.208 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:32.208 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:32.208 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:32.208 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.208 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.208 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.208 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.208 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.208 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.208 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:10:32.208 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.208 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.208 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.208 12:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.208 12:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.208 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.208 12:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.208 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.208 "name": "Existed_Raid", 00:10:32.208 "uuid": "7aae6cc2-f785-491a-8642-3b04d5cbb7ad", 00:10:32.208 "strip_size_kb": 0, 00:10:32.208 "state": "online", 00:10:32.208 "raid_level": "raid1", 00:10:32.208 "superblock": false, 00:10:32.208 "num_base_bdevs": 4, 00:10:32.208 "num_base_bdevs_discovered": 3, 00:10:32.208 "num_base_bdevs_operational": 3, 00:10:32.208 "base_bdevs_list": [ 00:10:32.208 { 00:10:32.208 "name": null, 00:10:32.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.208 "is_configured": false, 00:10:32.208 "data_offset": 0, 00:10:32.208 "data_size": 65536 00:10:32.208 }, 00:10:32.208 { 00:10:32.208 "name": "BaseBdev2", 00:10:32.208 "uuid": "101cb6c3-d9e8-4804-98a5-758a765610c3", 00:10:32.208 "is_configured": true, 00:10:32.208 "data_offset": 0, 00:10:32.208 "data_size": 65536 00:10:32.208 }, 00:10:32.208 { 00:10:32.208 "name": "BaseBdev3", 00:10:32.208 "uuid": "28b9a219-1380-4856-900b-fb57741cb051", 00:10:32.208 "is_configured": true, 00:10:32.208 "data_offset": 0, 00:10:32.208 "data_size": 65536 00:10:32.208 }, 00:10:32.208 { 
00:10:32.208 "name": "BaseBdev4", 00:10:32.208 "uuid": "01b55887-517f-4ead-b86a-f978af31522d", 00:10:32.208 "is_configured": true, 00:10:32.208 "data_offset": 0, 00:10:32.208 "data_size": 65536 00:10:32.208 } 00:10:32.208 ] 00:10:32.208 }' 00:10:32.208 12:53:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.208 12:53:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.468 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:32.468 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:32.468 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:32.468 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.468 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.468 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.468 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.729 [2024-11-26 12:53:50.158201] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.729 
12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.729 [2024-11-26 12:53:50.229240] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.729 12:53:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.729 [2024-11-26 12:53:50.283812] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:32.729 [2024-11-26 12:53:50.283943] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:32.729 [2024-11-26 12:53:50.295406] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.729 [2024-11-26 12:53:50.295514] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:32.729 [2024-11-26 12:53:50.295557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.729 12:53:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:32.729 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:32.730 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.730 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.730 BaseBdev2 00:10:32.730 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.730 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:32.730 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:32.730 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:32.730 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:32.730 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:32.730 12:53:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:32.730 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:32.730 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.730 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.730 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.730 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:32.730 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.730 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.730 [ 00:10:32.730 { 00:10:32.730 "name": "BaseBdev2", 00:10:32.730 "aliases": [ 00:10:32.730 "660ed0f2-801c-4555-8223-cab0a1ffec4c" 00:10:32.730 ], 00:10:32.730 "product_name": "Malloc disk", 00:10:32.730 "block_size": 512, 00:10:32.730 "num_blocks": 65536, 00:10:32.730 "uuid": "660ed0f2-801c-4555-8223-cab0a1ffec4c", 00:10:32.730 "assigned_rate_limits": { 00:10:32.730 "rw_ios_per_sec": 0, 00:10:32.730 "rw_mbytes_per_sec": 0, 00:10:32.730 "r_mbytes_per_sec": 0, 00:10:32.730 "w_mbytes_per_sec": 0 00:10:32.730 }, 00:10:32.730 "claimed": false, 00:10:32.730 "zoned": false, 00:10:32.730 "supported_io_types": { 00:10:32.730 "read": true, 00:10:32.730 "write": true, 00:10:32.730 "unmap": true, 00:10:32.730 "flush": true, 00:10:32.730 "reset": true, 00:10:32.730 "nvme_admin": false, 00:10:32.730 "nvme_io": false, 00:10:32.730 "nvme_io_md": false, 00:10:32.730 "write_zeroes": true, 00:10:32.730 "zcopy": true, 00:10:32.730 "get_zone_info": false, 00:10:32.730 "zone_management": false, 00:10:32.730 "zone_append": false, 00:10:32.730 "compare": false, 00:10:32.730 "compare_and_write": false, 
00:10:32.730 "abort": true, 00:10:32.730 "seek_hole": false, 00:10:32.730 "seek_data": false, 00:10:32.730 "copy": true, 00:10:32.730 "nvme_iov_md": false 00:10:32.730 }, 00:10:32.730 "memory_domains": [ 00:10:32.730 { 00:10:32.730 "dma_device_id": "system", 00:10:32.730 "dma_device_type": 1 00:10:32.730 }, 00:10:32.730 { 00:10:32.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.730 "dma_device_type": 2 00:10:32.730 } 00:10:32.730 ], 00:10:32.730 "driver_specific": {} 00:10:32.730 } 00:10:32.730 ] 00:10:32.730 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.730 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:32.730 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:32.730 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:32.730 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:32.730 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.730 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.991 BaseBdev3 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:32.991 12:53:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.991 [ 00:10:32.991 { 00:10:32.991 "name": "BaseBdev3", 00:10:32.991 "aliases": [ 00:10:32.991 "f25bdd69-9aee-4fab-985c-d47cc3ded077" 00:10:32.991 ], 00:10:32.991 "product_name": "Malloc disk", 00:10:32.991 "block_size": 512, 00:10:32.991 "num_blocks": 65536, 00:10:32.991 "uuid": "f25bdd69-9aee-4fab-985c-d47cc3ded077", 00:10:32.991 "assigned_rate_limits": { 00:10:32.991 "rw_ios_per_sec": 0, 00:10:32.991 "rw_mbytes_per_sec": 0, 00:10:32.991 "r_mbytes_per_sec": 0, 00:10:32.991 "w_mbytes_per_sec": 0 00:10:32.991 }, 00:10:32.991 "claimed": false, 00:10:32.991 "zoned": false, 00:10:32.991 "supported_io_types": { 00:10:32.991 "read": true, 00:10:32.991 "write": true, 00:10:32.991 "unmap": true, 00:10:32.991 "flush": true, 00:10:32.991 "reset": true, 00:10:32.991 "nvme_admin": false, 00:10:32.991 "nvme_io": false, 00:10:32.991 "nvme_io_md": false, 00:10:32.991 "write_zeroes": true, 00:10:32.991 "zcopy": true, 00:10:32.991 "get_zone_info": false, 00:10:32.991 "zone_management": false, 00:10:32.991 "zone_append": false, 00:10:32.991 "compare": false, 00:10:32.991 "compare_and_write": false, 
00:10:32.991 "abort": true, 00:10:32.991 "seek_hole": false, 00:10:32.991 "seek_data": false, 00:10:32.991 "copy": true, 00:10:32.991 "nvme_iov_md": false 00:10:32.991 }, 00:10:32.991 "memory_domains": [ 00:10:32.991 { 00:10:32.991 "dma_device_id": "system", 00:10:32.991 "dma_device_type": 1 00:10:32.991 }, 00:10:32.991 { 00:10:32.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.991 "dma_device_type": 2 00:10:32.991 } 00:10:32.991 ], 00:10:32.991 "driver_specific": {} 00:10:32.991 } 00:10:32.991 ] 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.991 BaseBdev4 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:32.991 12:53:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.991 [ 00:10:32.991 { 00:10:32.991 "name": "BaseBdev4", 00:10:32.991 "aliases": [ 00:10:32.991 "6cedfd94-9a46-4091-9040-60ee16410c7e" 00:10:32.991 ], 00:10:32.991 "product_name": "Malloc disk", 00:10:32.991 "block_size": 512, 00:10:32.991 "num_blocks": 65536, 00:10:32.991 "uuid": "6cedfd94-9a46-4091-9040-60ee16410c7e", 00:10:32.991 "assigned_rate_limits": { 00:10:32.991 "rw_ios_per_sec": 0, 00:10:32.991 "rw_mbytes_per_sec": 0, 00:10:32.991 "r_mbytes_per_sec": 0, 00:10:32.991 "w_mbytes_per_sec": 0 00:10:32.991 }, 00:10:32.991 "claimed": false, 00:10:32.991 "zoned": false, 00:10:32.991 "supported_io_types": { 00:10:32.991 "read": true, 00:10:32.991 "write": true, 00:10:32.991 "unmap": true, 00:10:32.991 "flush": true, 00:10:32.991 "reset": true, 00:10:32.991 "nvme_admin": false, 00:10:32.991 "nvme_io": false, 00:10:32.991 "nvme_io_md": false, 00:10:32.991 "write_zeroes": true, 00:10:32.991 "zcopy": true, 00:10:32.991 "get_zone_info": false, 00:10:32.991 "zone_management": false, 00:10:32.991 "zone_append": false, 00:10:32.991 "compare": false, 00:10:32.991 "compare_and_write": false, 
00:10:32.991 "abort": true, 00:10:32.991 "seek_hole": false, 00:10:32.991 "seek_data": false, 00:10:32.991 "copy": true, 00:10:32.991 "nvme_iov_md": false 00:10:32.991 }, 00:10:32.991 "memory_domains": [ 00:10:32.991 { 00:10:32.991 "dma_device_id": "system", 00:10:32.991 "dma_device_type": 1 00:10:32.991 }, 00:10:32.991 { 00:10:32.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.991 "dma_device_type": 2 00:10:32.991 } 00:10:32.991 ], 00:10:32.991 "driver_specific": {} 00:10:32.991 } 00:10:32.991 ] 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.991 [2024-11-26 12:53:50.510865] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:32.991 [2024-11-26 12:53:50.511351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:32.991 [2024-11-26 12:53:50.511384] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:32.991 [2024-11-26 12:53:50.513163] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:32.991 [2024-11-26 12:53:50.513213] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:32.991 12:53:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.991 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.992 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.992 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.992 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.992 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.992 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.992 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.992 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.992 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.992 "name": "Existed_Raid", 00:10:32.992 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:32.992 "strip_size_kb": 0, 00:10:32.992 "state": "configuring", 00:10:32.992 "raid_level": "raid1", 00:10:32.992 "superblock": false, 00:10:32.992 "num_base_bdevs": 4, 00:10:32.992 "num_base_bdevs_discovered": 3, 00:10:32.992 "num_base_bdevs_operational": 4, 00:10:32.992 "base_bdevs_list": [ 00:10:32.992 { 00:10:32.992 "name": "BaseBdev1", 00:10:32.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.992 "is_configured": false, 00:10:32.992 "data_offset": 0, 00:10:32.992 "data_size": 0 00:10:32.992 }, 00:10:32.992 { 00:10:32.992 "name": "BaseBdev2", 00:10:32.992 "uuid": "660ed0f2-801c-4555-8223-cab0a1ffec4c", 00:10:32.992 "is_configured": true, 00:10:32.992 "data_offset": 0, 00:10:32.992 "data_size": 65536 00:10:32.992 }, 00:10:32.992 { 00:10:32.992 "name": "BaseBdev3", 00:10:32.992 "uuid": "f25bdd69-9aee-4fab-985c-d47cc3ded077", 00:10:32.992 "is_configured": true, 00:10:32.992 "data_offset": 0, 00:10:32.992 "data_size": 65536 00:10:32.992 }, 00:10:32.992 { 00:10:32.992 "name": "BaseBdev4", 00:10:32.992 "uuid": "6cedfd94-9a46-4091-9040-60ee16410c7e", 00:10:32.992 "is_configured": true, 00:10:32.992 "data_offset": 0, 00:10:32.992 "data_size": 65536 00:10:32.992 } 00:10:32.992 ] 00:10:32.992 }' 00:10:32.992 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.992 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.562 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:33.562 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.562 12:53:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.562 [2024-11-26 12:53:50.994061] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:33.562 12:53:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.562 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:33.562 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.562 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.562 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:33.562 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:33.562 12:53:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.562 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.562 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.562 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.562 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.562 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.562 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.562 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.562 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.562 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.562 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.562 "name": "Existed_Raid", 00:10:33.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.562 
"strip_size_kb": 0, 00:10:33.562 "state": "configuring", 00:10:33.562 "raid_level": "raid1", 00:10:33.562 "superblock": false, 00:10:33.562 "num_base_bdevs": 4, 00:10:33.562 "num_base_bdevs_discovered": 2, 00:10:33.562 "num_base_bdevs_operational": 4, 00:10:33.562 "base_bdevs_list": [ 00:10:33.562 { 00:10:33.562 "name": "BaseBdev1", 00:10:33.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.562 "is_configured": false, 00:10:33.562 "data_offset": 0, 00:10:33.562 "data_size": 0 00:10:33.562 }, 00:10:33.562 { 00:10:33.562 "name": null, 00:10:33.562 "uuid": "660ed0f2-801c-4555-8223-cab0a1ffec4c", 00:10:33.562 "is_configured": false, 00:10:33.562 "data_offset": 0, 00:10:33.562 "data_size": 65536 00:10:33.562 }, 00:10:33.562 { 00:10:33.562 "name": "BaseBdev3", 00:10:33.562 "uuid": "f25bdd69-9aee-4fab-985c-d47cc3ded077", 00:10:33.562 "is_configured": true, 00:10:33.563 "data_offset": 0, 00:10:33.563 "data_size": 65536 00:10:33.563 }, 00:10:33.563 { 00:10:33.563 "name": "BaseBdev4", 00:10:33.563 "uuid": "6cedfd94-9a46-4091-9040-60ee16410c7e", 00:10:33.563 "is_configured": true, 00:10:33.563 "data_offset": 0, 00:10:33.563 "data_size": 65536 00:10:33.563 } 00:10:33.563 ] 00:10:33.563 }' 00:10:33.563 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.563 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.823 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:33.823 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.823 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.823 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.823 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.823 12:53:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:33.823 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:33.823 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.823 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.824 [2024-11-26 12:53:51.444226] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:33.824 BaseBdev1 00:10:33.824 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.824 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:33.824 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:33.824 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:33.824 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:33.824 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:33.824 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:33.824 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:33.824 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.824 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.824 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.824 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:33.824 12:53:51 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.824 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.824 [ 00:10:33.824 { 00:10:33.824 "name": "BaseBdev1", 00:10:33.824 "aliases": [ 00:10:33.824 "cd0bb54e-8fe6-4c6c-ba8a-b4ac31072824" 00:10:33.824 ], 00:10:33.824 "product_name": "Malloc disk", 00:10:33.824 "block_size": 512, 00:10:33.824 "num_blocks": 65536, 00:10:33.824 "uuid": "cd0bb54e-8fe6-4c6c-ba8a-b4ac31072824", 00:10:33.824 "assigned_rate_limits": { 00:10:33.824 "rw_ios_per_sec": 0, 00:10:33.824 "rw_mbytes_per_sec": 0, 00:10:33.824 "r_mbytes_per_sec": 0, 00:10:33.824 "w_mbytes_per_sec": 0 00:10:33.824 }, 00:10:33.824 "claimed": true, 00:10:33.824 "claim_type": "exclusive_write", 00:10:33.824 "zoned": false, 00:10:33.824 "supported_io_types": { 00:10:33.824 "read": true, 00:10:33.824 "write": true, 00:10:33.824 "unmap": true, 00:10:33.824 "flush": true, 00:10:33.824 "reset": true, 00:10:33.824 "nvme_admin": false, 00:10:33.824 "nvme_io": false, 00:10:33.824 "nvme_io_md": false, 00:10:33.824 "write_zeroes": true, 00:10:33.824 "zcopy": true, 00:10:33.824 "get_zone_info": false, 00:10:33.824 "zone_management": false, 00:10:33.824 "zone_append": false, 00:10:33.824 "compare": false, 00:10:33.824 "compare_and_write": false, 00:10:33.824 "abort": true, 00:10:33.824 "seek_hole": false, 00:10:33.824 "seek_data": false, 00:10:33.824 "copy": true, 00:10:33.824 "nvme_iov_md": false 00:10:33.824 }, 00:10:33.824 "memory_domains": [ 00:10:33.824 { 00:10:33.824 "dma_device_id": "system", 00:10:33.824 "dma_device_type": 1 00:10:33.824 }, 00:10:33.824 { 00:10:33.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.824 "dma_device_type": 2 00:10:33.824 } 00:10:33.824 ], 00:10:33.824 "driver_specific": {} 00:10:33.824 } 00:10:33.824 ] 00:10:33.824 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.824 12:53:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@907 -- # return 0 00:10:33.824 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:33.824 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.824 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.824 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:33.824 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:33.824 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.824 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.824 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.824 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.824 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.824 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.824 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.824 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.824 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.085 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.085 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.085 "name": "Existed_Raid", 00:10:34.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.085 
"strip_size_kb": 0, 00:10:34.085 "state": "configuring", 00:10:34.085 "raid_level": "raid1", 00:10:34.085 "superblock": false, 00:10:34.085 "num_base_bdevs": 4, 00:10:34.085 "num_base_bdevs_discovered": 3, 00:10:34.085 "num_base_bdevs_operational": 4, 00:10:34.085 "base_bdevs_list": [ 00:10:34.085 { 00:10:34.085 "name": "BaseBdev1", 00:10:34.085 "uuid": "cd0bb54e-8fe6-4c6c-ba8a-b4ac31072824", 00:10:34.085 "is_configured": true, 00:10:34.085 "data_offset": 0, 00:10:34.085 "data_size": 65536 00:10:34.085 }, 00:10:34.085 { 00:10:34.085 "name": null, 00:10:34.085 "uuid": "660ed0f2-801c-4555-8223-cab0a1ffec4c", 00:10:34.085 "is_configured": false, 00:10:34.085 "data_offset": 0, 00:10:34.085 "data_size": 65536 00:10:34.085 }, 00:10:34.085 { 00:10:34.085 "name": "BaseBdev3", 00:10:34.085 "uuid": "f25bdd69-9aee-4fab-985c-d47cc3ded077", 00:10:34.085 "is_configured": true, 00:10:34.085 "data_offset": 0, 00:10:34.085 "data_size": 65536 00:10:34.085 }, 00:10:34.085 { 00:10:34.085 "name": "BaseBdev4", 00:10:34.085 "uuid": "6cedfd94-9a46-4091-9040-60ee16410c7e", 00:10:34.085 "is_configured": true, 00:10:34.085 "data_offset": 0, 00:10:34.085 "data_size": 65536 00:10:34.085 } 00:10:34.085 ] 00:10:34.085 }' 00:10:34.085 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.085 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.345 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:34.345 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.345 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.345 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.345 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.345 
12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:34.345 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:34.345 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.345 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.345 [2024-11-26 12:53:51.927436] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:34.345 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.345 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:34.345 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.345 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.345 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:34.345 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:34.345 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.345 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.345 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.345 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.345 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.345 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.345 12:53:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.345 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.345 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.345 12:53:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.345 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.345 "name": "Existed_Raid", 00:10:34.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.345 "strip_size_kb": 0, 00:10:34.345 "state": "configuring", 00:10:34.345 "raid_level": "raid1", 00:10:34.345 "superblock": false, 00:10:34.345 "num_base_bdevs": 4, 00:10:34.345 "num_base_bdevs_discovered": 2, 00:10:34.345 "num_base_bdevs_operational": 4, 00:10:34.345 "base_bdevs_list": [ 00:10:34.345 { 00:10:34.345 "name": "BaseBdev1", 00:10:34.345 "uuid": "cd0bb54e-8fe6-4c6c-ba8a-b4ac31072824", 00:10:34.345 "is_configured": true, 00:10:34.345 "data_offset": 0, 00:10:34.345 "data_size": 65536 00:10:34.345 }, 00:10:34.345 { 00:10:34.345 "name": null, 00:10:34.345 "uuid": "660ed0f2-801c-4555-8223-cab0a1ffec4c", 00:10:34.345 "is_configured": false, 00:10:34.345 "data_offset": 0, 00:10:34.345 "data_size": 65536 00:10:34.345 }, 00:10:34.345 { 00:10:34.345 "name": null, 00:10:34.345 "uuid": "f25bdd69-9aee-4fab-985c-d47cc3ded077", 00:10:34.345 "is_configured": false, 00:10:34.345 "data_offset": 0, 00:10:34.345 "data_size": 65536 00:10:34.345 }, 00:10:34.345 { 00:10:34.345 "name": "BaseBdev4", 00:10:34.345 "uuid": "6cedfd94-9a46-4091-9040-60ee16410c7e", 00:10:34.345 "is_configured": true, 00:10:34.345 "data_offset": 0, 00:10:34.345 "data_size": 65536 00:10:34.345 } 00:10:34.345 ] 00:10:34.345 }' 00:10:34.345 12:53:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.345 12:53:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:34.915 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:34.915 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.915 12:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.915 12:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.915 12:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.915 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:34.915 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:34.915 12:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.915 12:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.915 [2024-11-26 12:53:52.350768] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:34.915 12:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.915 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:34.915 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.915 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.915 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:34.915 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:34.915 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:34.915 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.915 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.915 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.915 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.915 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.915 12:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.915 12:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.915 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.915 12:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.915 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.915 "name": "Existed_Raid", 00:10:34.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.915 "strip_size_kb": 0, 00:10:34.915 "state": "configuring", 00:10:34.915 "raid_level": "raid1", 00:10:34.915 "superblock": false, 00:10:34.915 "num_base_bdevs": 4, 00:10:34.915 "num_base_bdevs_discovered": 3, 00:10:34.915 "num_base_bdevs_operational": 4, 00:10:34.915 "base_bdevs_list": [ 00:10:34.915 { 00:10:34.915 "name": "BaseBdev1", 00:10:34.915 "uuid": "cd0bb54e-8fe6-4c6c-ba8a-b4ac31072824", 00:10:34.915 "is_configured": true, 00:10:34.915 "data_offset": 0, 00:10:34.915 "data_size": 65536 00:10:34.915 }, 00:10:34.915 { 00:10:34.915 "name": null, 00:10:34.915 "uuid": "660ed0f2-801c-4555-8223-cab0a1ffec4c", 00:10:34.915 "is_configured": false, 00:10:34.915 "data_offset": 0, 00:10:34.915 "data_size": 65536 00:10:34.915 }, 00:10:34.915 { 
00:10:34.915 "name": "BaseBdev3", 00:10:34.915 "uuid": "f25bdd69-9aee-4fab-985c-d47cc3ded077", 00:10:34.915 "is_configured": true, 00:10:34.915 "data_offset": 0, 00:10:34.915 "data_size": 65536 00:10:34.915 }, 00:10:34.915 { 00:10:34.915 "name": "BaseBdev4", 00:10:34.915 "uuid": "6cedfd94-9a46-4091-9040-60ee16410c7e", 00:10:34.915 "is_configured": true, 00:10:34.915 "data_offset": 0, 00:10:34.915 "data_size": 65536 00:10:34.915 } 00:10:34.915 ] 00:10:34.915 }' 00:10:34.915 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.915 12:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.177 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.178 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:35.178 12:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.178 12:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.178 12:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.178 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:35.178 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:35.178 12:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.178 12:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.178 [2024-11-26 12:53:52.833990] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:35.178 12:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.178 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:35.178 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.178 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.178 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:35.178 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:35.178 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.178 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.178 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.178 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.178 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.178 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.453 12:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.453 12:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.453 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.453 12:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.453 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.453 "name": "Existed_Raid", 00:10:35.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.453 "strip_size_kb": 0, 00:10:35.453 "state": "configuring", 00:10:35.453 "raid_level": "raid1", 00:10:35.453 "superblock": false, 00:10:35.453 
"num_base_bdevs": 4, 00:10:35.453 "num_base_bdevs_discovered": 2, 00:10:35.453 "num_base_bdevs_operational": 4, 00:10:35.453 "base_bdevs_list": [ 00:10:35.453 { 00:10:35.453 "name": null, 00:10:35.453 "uuid": "cd0bb54e-8fe6-4c6c-ba8a-b4ac31072824", 00:10:35.453 "is_configured": false, 00:10:35.453 "data_offset": 0, 00:10:35.453 "data_size": 65536 00:10:35.453 }, 00:10:35.453 { 00:10:35.453 "name": null, 00:10:35.453 "uuid": "660ed0f2-801c-4555-8223-cab0a1ffec4c", 00:10:35.453 "is_configured": false, 00:10:35.453 "data_offset": 0, 00:10:35.453 "data_size": 65536 00:10:35.453 }, 00:10:35.453 { 00:10:35.453 "name": "BaseBdev3", 00:10:35.453 "uuid": "f25bdd69-9aee-4fab-985c-d47cc3ded077", 00:10:35.453 "is_configured": true, 00:10:35.453 "data_offset": 0, 00:10:35.453 "data_size": 65536 00:10:35.453 }, 00:10:35.453 { 00:10:35.453 "name": "BaseBdev4", 00:10:35.453 "uuid": "6cedfd94-9a46-4091-9040-60ee16410c7e", 00:10:35.453 "is_configured": true, 00:10:35.453 "data_offset": 0, 00:10:35.453 "data_size": 65536 00:10:35.453 } 00:10:35.453 ] 00:10:35.453 }' 00:10:35.453 12:53:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.453 12:53:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.729 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.729 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.730 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.730 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:35.730 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.730 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:35.730 12:53:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:35.730 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.730 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.730 [2024-11-26 12:53:53.359397] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:35.730 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.730 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:35.730 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.730 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.730 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:35.730 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:35.730 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.730 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.730 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.730 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.730 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.730 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.730 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.730 12:53:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.730 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.730 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.730 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.730 "name": "Existed_Raid", 00:10:35.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.730 "strip_size_kb": 0, 00:10:35.730 "state": "configuring", 00:10:35.730 "raid_level": "raid1", 00:10:35.730 "superblock": false, 00:10:35.730 "num_base_bdevs": 4, 00:10:35.730 "num_base_bdevs_discovered": 3, 00:10:35.730 "num_base_bdevs_operational": 4, 00:10:35.730 "base_bdevs_list": [ 00:10:35.730 { 00:10:35.730 "name": null, 00:10:35.730 "uuid": "cd0bb54e-8fe6-4c6c-ba8a-b4ac31072824", 00:10:35.730 "is_configured": false, 00:10:35.730 "data_offset": 0, 00:10:35.730 "data_size": 65536 00:10:35.730 }, 00:10:35.730 { 00:10:35.730 "name": "BaseBdev2", 00:10:35.730 "uuid": "660ed0f2-801c-4555-8223-cab0a1ffec4c", 00:10:35.730 "is_configured": true, 00:10:35.730 "data_offset": 0, 00:10:35.730 "data_size": 65536 00:10:35.730 }, 00:10:35.730 { 00:10:35.730 "name": "BaseBdev3", 00:10:35.730 "uuid": "f25bdd69-9aee-4fab-985c-d47cc3ded077", 00:10:35.730 "is_configured": true, 00:10:35.730 "data_offset": 0, 00:10:35.730 "data_size": 65536 00:10:35.730 }, 00:10:35.730 { 00:10:35.730 "name": "BaseBdev4", 00:10:35.730 "uuid": "6cedfd94-9a46-4091-9040-60ee16410c7e", 00:10:35.730 "is_configured": true, 00:10:35.730 "data_offset": 0, 00:10:35.730 "data_size": 65536 00:10:35.730 } 00:10:35.730 ] 00:10:35.730 }' 00:10:35.730 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.730 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.299 12:53:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.299 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:36.299 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.299 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.299 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.299 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cd0bb54e-8fe6-4c6c-ba8a-b4ac31072824 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.300 [2024-11-26 12:53:53.853705] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:36.300 [2024-11-26 12:53:53.853839] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:36.300 [2024-11-26 12:53:53.853869] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:36.300 [2024-11-26 12:53:53.854131] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:36.300 [2024-11-26 12:53:53.854323] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:36.300 [2024-11-26 12:53:53.854365] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:36.300 [2024-11-26 12:53:53.854578] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.300 NewBaseBdev 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.300 12:53:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.300 [ 00:10:36.300 { 00:10:36.300 "name": "NewBaseBdev", 00:10:36.300 "aliases": [ 00:10:36.300 "cd0bb54e-8fe6-4c6c-ba8a-b4ac31072824" 00:10:36.300 ], 00:10:36.300 "product_name": "Malloc disk", 00:10:36.300 "block_size": 512, 00:10:36.300 "num_blocks": 65536, 00:10:36.300 "uuid": "cd0bb54e-8fe6-4c6c-ba8a-b4ac31072824", 00:10:36.300 "assigned_rate_limits": { 00:10:36.300 "rw_ios_per_sec": 0, 00:10:36.300 "rw_mbytes_per_sec": 0, 00:10:36.300 "r_mbytes_per_sec": 0, 00:10:36.300 "w_mbytes_per_sec": 0 00:10:36.300 }, 00:10:36.300 "claimed": true, 00:10:36.300 "claim_type": "exclusive_write", 00:10:36.300 "zoned": false, 00:10:36.300 "supported_io_types": { 00:10:36.300 "read": true, 00:10:36.300 "write": true, 00:10:36.300 "unmap": true, 00:10:36.300 "flush": true, 00:10:36.300 "reset": true, 00:10:36.300 "nvme_admin": false, 00:10:36.300 "nvme_io": false, 00:10:36.300 "nvme_io_md": false, 00:10:36.300 "write_zeroes": true, 00:10:36.300 "zcopy": true, 00:10:36.300 "get_zone_info": false, 00:10:36.300 "zone_management": false, 00:10:36.300 "zone_append": false, 00:10:36.300 "compare": false, 00:10:36.300 "compare_and_write": false, 00:10:36.300 "abort": true, 00:10:36.300 "seek_hole": false, 00:10:36.300 "seek_data": false, 00:10:36.300 "copy": true, 00:10:36.300 "nvme_iov_md": false 00:10:36.300 }, 00:10:36.300 "memory_domains": [ 00:10:36.300 { 00:10:36.300 "dma_device_id": "system", 00:10:36.300 "dma_device_type": 1 00:10:36.300 }, 00:10:36.300 { 00:10:36.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.300 "dma_device_type": 2 00:10:36.300 } 00:10:36.300 ], 00:10:36.300 "driver_specific": {} 00:10:36.300 } 00:10:36.300 ] 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:36.300 12:53:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.300 "name": "Existed_Raid", 00:10:36.300 "uuid": "42154fc2-502e-4476-a90e-f17f7b16b6e5", 00:10:36.300 "strip_size_kb": 0, 00:10:36.300 "state": "online", 00:10:36.300 "raid_level": "raid1", 
00:10:36.300 "superblock": false, 00:10:36.300 "num_base_bdevs": 4, 00:10:36.300 "num_base_bdevs_discovered": 4, 00:10:36.300 "num_base_bdevs_operational": 4, 00:10:36.300 "base_bdevs_list": [ 00:10:36.300 { 00:10:36.300 "name": "NewBaseBdev", 00:10:36.300 "uuid": "cd0bb54e-8fe6-4c6c-ba8a-b4ac31072824", 00:10:36.300 "is_configured": true, 00:10:36.300 "data_offset": 0, 00:10:36.300 "data_size": 65536 00:10:36.300 }, 00:10:36.300 { 00:10:36.300 "name": "BaseBdev2", 00:10:36.300 "uuid": "660ed0f2-801c-4555-8223-cab0a1ffec4c", 00:10:36.300 "is_configured": true, 00:10:36.300 "data_offset": 0, 00:10:36.300 "data_size": 65536 00:10:36.300 }, 00:10:36.300 { 00:10:36.300 "name": "BaseBdev3", 00:10:36.300 "uuid": "f25bdd69-9aee-4fab-985c-d47cc3ded077", 00:10:36.300 "is_configured": true, 00:10:36.300 "data_offset": 0, 00:10:36.300 "data_size": 65536 00:10:36.300 }, 00:10:36.300 { 00:10:36.300 "name": "BaseBdev4", 00:10:36.300 "uuid": "6cedfd94-9a46-4091-9040-60ee16410c7e", 00:10:36.300 "is_configured": true, 00:10:36.300 "data_offset": 0, 00:10:36.300 "data_size": 65536 00:10:36.300 } 00:10:36.300 ] 00:10:36.300 }' 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.300 12:53:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.868 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:36.868 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:36.868 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:36.868 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:36.868 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:36.868 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:10:36.868 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:36.868 12:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.868 12:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.868 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:36.868 [2024-11-26 12:53:54.313278] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:36.868 12:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.868 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:36.868 "name": "Existed_Raid", 00:10:36.868 "aliases": [ 00:10:36.868 "42154fc2-502e-4476-a90e-f17f7b16b6e5" 00:10:36.868 ], 00:10:36.868 "product_name": "Raid Volume", 00:10:36.868 "block_size": 512, 00:10:36.868 "num_blocks": 65536, 00:10:36.868 "uuid": "42154fc2-502e-4476-a90e-f17f7b16b6e5", 00:10:36.868 "assigned_rate_limits": { 00:10:36.868 "rw_ios_per_sec": 0, 00:10:36.868 "rw_mbytes_per_sec": 0, 00:10:36.868 "r_mbytes_per_sec": 0, 00:10:36.868 "w_mbytes_per_sec": 0 00:10:36.868 }, 00:10:36.868 "claimed": false, 00:10:36.868 "zoned": false, 00:10:36.868 "supported_io_types": { 00:10:36.868 "read": true, 00:10:36.868 "write": true, 00:10:36.868 "unmap": false, 00:10:36.868 "flush": false, 00:10:36.868 "reset": true, 00:10:36.868 "nvme_admin": false, 00:10:36.868 "nvme_io": false, 00:10:36.868 "nvme_io_md": false, 00:10:36.868 "write_zeroes": true, 00:10:36.868 "zcopy": false, 00:10:36.868 "get_zone_info": false, 00:10:36.868 "zone_management": false, 00:10:36.868 "zone_append": false, 00:10:36.868 "compare": false, 00:10:36.868 "compare_and_write": false, 00:10:36.868 "abort": false, 00:10:36.868 "seek_hole": false, 00:10:36.868 "seek_data": false, 00:10:36.868 "copy": false, 00:10:36.868 
"nvme_iov_md": false 00:10:36.868 }, 00:10:36.868 "memory_domains": [ 00:10:36.868 { 00:10:36.868 "dma_device_id": "system", 00:10:36.868 "dma_device_type": 1 00:10:36.868 }, 00:10:36.868 { 00:10:36.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.868 "dma_device_type": 2 00:10:36.868 }, 00:10:36.868 { 00:10:36.868 "dma_device_id": "system", 00:10:36.868 "dma_device_type": 1 00:10:36.868 }, 00:10:36.868 { 00:10:36.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.868 "dma_device_type": 2 00:10:36.868 }, 00:10:36.868 { 00:10:36.868 "dma_device_id": "system", 00:10:36.868 "dma_device_type": 1 00:10:36.868 }, 00:10:36.868 { 00:10:36.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.868 "dma_device_type": 2 00:10:36.868 }, 00:10:36.868 { 00:10:36.868 "dma_device_id": "system", 00:10:36.868 "dma_device_type": 1 00:10:36.868 }, 00:10:36.868 { 00:10:36.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.868 "dma_device_type": 2 00:10:36.868 } 00:10:36.868 ], 00:10:36.868 "driver_specific": { 00:10:36.868 "raid": { 00:10:36.868 "uuid": "42154fc2-502e-4476-a90e-f17f7b16b6e5", 00:10:36.868 "strip_size_kb": 0, 00:10:36.868 "state": "online", 00:10:36.868 "raid_level": "raid1", 00:10:36.868 "superblock": false, 00:10:36.868 "num_base_bdevs": 4, 00:10:36.868 "num_base_bdevs_discovered": 4, 00:10:36.868 "num_base_bdevs_operational": 4, 00:10:36.868 "base_bdevs_list": [ 00:10:36.868 { 00:10:36.868 "name": "NewBaseBdev", 00:10:36.868 "uuid": "cd0bb54e-8fe6-4c6c-ba8a-b4ac31072824", 00:10:36.868 "is_configured": true, 00:10:36.868 "data_offset": 0, 00:10:36.869 "data_size": 65536 00:10:36.869 }, 00:10:36.869 { 00:10:36.869 "name": "BaseBdev2", 00:10:36.869 "uuid": "660ed0f2-801c-4555-8223-cab0a1ffec4c", 00:10:36.869 "is_configured": true, 00:10:36.869 "data_offset": 0, 00:10:36.869 "data_size": 65536 00:10:36.869 }, 00:10:36.869 { 00:10:36.869 "name": "BaseBdev3", 00:10:36.869 "uuid": "f25bdd69-9aee-4fab-985c-d47cc3ded077", 00:10:36.869 "is_configured": true, 
00:10:36.869 "data_offset": 0, 00:10:36.869 "data_size": 65536 00:10:36.869 }, 00:10:36.869 { 00:10:36.869 "name": "BaseBdev4", 00:10:36.869 "uuid": "6cedfd94-9a46-4091-9040-60ee16410c7e", 00:10:36.869 "is_configured": true, 00:10:36.869 "data_offset": 0, 00:10:36.869 "data_size": 65536 00:10:36.869 } 00:10:36.869 ] 00:10:36.869 } 00:10:36.869 } 00:10:36.869 }' 00:10:36.869 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:36.869 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:36.869 BaseBdev2 00:10:36.869 BaseBdev3 00:10:36.869 BaseBdev4' 00:10:36.869 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.869 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:36.869 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.869 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.869 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:36.869 12:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.869 12:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.869 12:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.869 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.869 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.869 12:53:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.869 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.869 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:36.869 12:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.869 12:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.869 12:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.869 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.869 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.869 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.869 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:36.869 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.869 12:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.869 12:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.869 12:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.129 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.129 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.129 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.129 12:53:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.129 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:37.129 12:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.129 12:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.129 12:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.129 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.130 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.130 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:37.130 12:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.130 12:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.130 [2024-11-26 12:53:54.616451] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:37.130 [2024-11-26 12:53:54.616521] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:37.130 [2024-11-26 12:53:54.616618] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:37.130 [2024-11-26 12:53:54.616872] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:37.130 [2024-11-26 12:53:54.616889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:10:37.130 12:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.130 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 84195 
00:10:37.130 12:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 84195 ']' 00:10:37.130 12:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 84195 00:10:37.130 12:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:37.130 12:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:37.130 12:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84195 00:10:37.130 12:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:37.130 12:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:37.130 killing process with pid 84195 00:10:37.130 12:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84195' 00:10:37.130 12:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 84195 00:10:37.130 [2024-11-26 12:53:54.666621] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:37.130 12:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 84195 00:10:37.130 [2024-11-26 12:53:54.706189] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:37.391 12:53:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:37.391 00:10:37.391 real 0m9.386s 00:10:37.391 user 0m15.997s 00:10:37.391 sys 0m1.998s 00:10:37.391 12:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:37.391 12:53:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.391 ************************************ 00:10:37.391 END TEST raid_state_function_test 00:10:37.391 ************************************ 00:10:37.391 12:53:55 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:10:37.391 12:53:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:37.391 12:53:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.391 12:53:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:37.391 ************************************ 00:10:37.391 START TEST raid_state_function_test_sb 00:10:37.391 ************************************ 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:37.391 12:53:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84839 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84839' 00:10:37.391 Process raid pid: 84839 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84839 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 84839 ']' 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:37.391 12:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.652 [2024-11-26 12:53:55.120677] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:37.652 [2024-11-26 12:53:55.121330] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.652 [2024-11-26 12:53:55.280966] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.652 [2024-11-26 12:53:55.325540] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.912 [2024-11-26 12:53:55.368168] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.912 [2024-11-26 12:53:55.368303] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.481 12:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:38.481 12:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:38.481 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:38.481 12:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.481 12:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.481 [2024-11-26 12:53:55.957456] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:38.481 [2024-11-26 12:53:55.957727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:38.481 [2024-11-26 12:53:55.957746] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:38.481 [2024-11-26 12:53:55.957759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:38.481 [2024-11-26 12:53:55.957768] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:10:38.481 [2024-11-26 12:53:55.957781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:38.481 [2024-11-26 12:53:55.957787] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:38.481 [2024-11-26 12:53:55.957796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:38.481 12:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.481 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:38.481 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.481 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.481 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.481 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.481 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.481 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.481 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.481 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.481 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.481 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.481 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.481 12:53:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.481 12:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.481 12:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.481 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.481 "name": "Existed_Raid", 00:10:38.481 "uuid": "7265e998-51b0-43b8-b1a3-3a24dc74329b", 00:10:38.481 "strip_size_kb": 0, 00:10:38.481 "state": "configuring", 00:10:38.481 "raid_level": "raid1", 00:10:38.481 "superblock": true, 00:10:38.481 "num_base_bdevs": 4, 00:10:38.481 "num_base_bdevs_discovered": 0, 00:10:38.481 "num_base_bdevs_operational": 4, 00:10:38.481 "base_bdevs_list": [ 00:10:38.481 { 00:10:38.481 "name": "BaseBdev1", 00:10:38.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.481 "is_configured": false, 00:10:38.481 "data_offset": 0, 00:10:38.481 "data_size": 0 00:10:38.481 }, 00:10:38.481 { 00:10:38.481 "name": "BaseBdev2", 00:10:38.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.481 "is_configured": false, 00:10:38.481 "data_offset": 0, 00:10:38.481 "data_size": 0 00:10:38.481 }, 00:10:38.481 { 00:10:38.481 "name": "BaseBdev3", 00:10:38.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.481 "is_configured": false, 00:10:38.481 "data_offset": 0, 00:10:38.481 "data_size": 0 00:10:38.481 }, 00:10:38.481 { 00:10:38.481 "name": "BaseBdev4", 00:10:38.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.481 "is_configured": false, 00:10:38.481 "data_offset": 0, 00:10:38.481 "data_size": 0 00:10:38.481 } 00:10:38.481 ] 00:10:38.481 }' 00:10:38.481 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.481 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.742 12:53:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:38.742 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.742 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.742 [2024-11-26 12:53:56.368739] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:38.742 [2024-11-26 12:53:56.368832] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:38.742 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.742 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:38.742 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.742 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.742 [2024-11-26 12:53:56.376761] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:38.742 [2024-11-26 12:53:56.377091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:38.742 [2024-11-26 12:53:56.377143] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:38.742 [2024-11-26 12:53:56.377240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:38.742 [2024-11-26 12:53:56.377281] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:38.742 [2024-11-26 12:53:56.377344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:38.742 [2024-11-26 12:53:56.377373] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:10:38.742 [2024-11-26 12:53:56.377434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:38.742 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.742 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:38.742 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.742 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.742 [2024-11-26 12:53:56.393557] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:38.742 BaseBdev1 00:10:38.742 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.742 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:38.742 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:38.742 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:38.742 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:38.742 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:38.742 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:38.742 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:38.742 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.742 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.742 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:38.742 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:38.742 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.742 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.002 [ 00:10:39.002 { 00:10:39.002 "name": "BaseBdev1", 00:10:39.002 "aliases": [ 00:10:39.002 "db06780e-c8e9-44e9-9ad0-4f31ddc80d50" 00:10:39.002 ], 00:10:39.002 "product_name": "Malloc disk", 00:10:39.002 "block_size": 512, 00:10:39.002 "num_blocks": 65536, 00:10:39.002 "uuid": "db06780e-c8e9-44e9-9ad0-4f31ddc80d50", 00:10:39.002 "assigned_rate_limits": { 00:10:39.002 "rw_ios_per_sec": 0, 00:10:39.002 "rw_mbytes_per_sec": 0, 00:10:39.002 "r_mbytes_per_sec": 0, 00:10:39.002 "w_mbytes_per_sec": 0 00:10:39.002 }, 00:10:39.002 "claimed": true, 00:10:39.002 "claim_type": "exclusive_write", 00:10:39.002 "zoned": false, 00:10:39.002 "supported_io_types": { 00:10:39.002 "read": true, 00:10:39.002 "write": true, 00:10:39.002 "unmap": true, 00:10:39.002 "flush": true, 00:10:39.002 "reset": true, 00:10:39.002 "nvme_admin": false, 00:10:39.002 "nvme_io": false, 00:10:39.002 "nvme_io_md": false, 00:10:39.002 "write_zeroes": true, 00:10:39.002 "zcopy": true, 00:10:39.002 "get_zone_info": false, 00:10:39.002 "zone_management": false, 00:10:39.002 "zone_append": false, 00:10:39.002 "compare": false, 00:10:39.002 "compare_and_write": false, 00:10:39.002 "abort": true, 00:10:39.002 "seek_hole": false, 00:10:39.002 "seek_data": false, 00:10:39.002 "copy": true, 00:10:39.002 "nvme_iov_md": false 00:10:39.002 }, 00:10:39.002 "memory_domains": [ 00:10:39.002 { 00:10:39.002 "dma_device_id": "system", 00:10:39.002 "dma_device_type": 1 00:10:39.002 }, 00:10:39.002 { 00:10:39.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.002 "dma_device_type": 2 00:10:39.002 } 00:10:39.002 ], 00:10:39.002 "driver_specific": {} 
00:10:39.002 } 00:10:39.002 ] 00:10:39.002 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.002 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:39.002 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:39.002 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.002 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.002 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.002 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.002 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.002 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.002 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.002 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.002 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.002 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.002 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.002 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.003 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.003 12:53:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.003 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.003 "name": "Existed_Raid", 00:10:39.003 "uuid": "38da10c5-1097-4c92-a41f-b90459ec436a", 00:10:39.003 "strip_size_kb": 0, 00:10:39.003 "state": "configuring", 00:10:39.003 "raid_level": "raid1", 00:10:39.003 "superblock": true, 00:10:39.003 "num_base_bdevs": 4, 00:10:39.003 "num_base_bdevs_discovered": 1, 00:10:39.003 "num_base_bdevs_operational": 4, 00:10:39.003 "base_bdevs_list": [ 00:10:39.003 { 00:10:39.003 "name": "BaseBdev1", 00:10:39.003 "uuid": "db06780e-c8e9-44e9-9ad0-4f31ddc80d50", 00:10:39.003 "is_configured": true, 00:10:39.003 "data_offset": 2048, 00:10:39.003 "data_size": 63488 00:10:39.003 }, 00:10:39.003 { 00:10:39.003 "name": "BaseBdev2", 00:10:39.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.003 "is_configured": false, 00:10:39.003 "data_offset": 0, 00:10:39.003 "data_size": 0 00:10:39.003 }, 00:10:39.003 { 00:10:39.003 "name": "BaseBdev3", 00:10:39.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.003 "is_configured": false, 00:10:39.003 "data_offset": 0, 00:10:39.003 "data_size": 0 00:10:39.003 }, 00:10:39.003 { 00:10:39.003 "name": "BaseBdev4", 00:10:39.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.003 "is_configured": false, 00:10:39.003 "data_offset": 0, 00:10:39.003 "data_size": 0 00:10:39.003 } 00:10:39.003 ] 00:10:39.003 }' 00:10:39.003 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.003 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.263 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:39.263 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.263 12:53:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:39.263 [2024-11-26 12:53:56.876751] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:39.263 [2024-11-26 12:53:56.876860] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:39.263 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.263 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:39.263 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.263 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.263 [2024-11-26 12:53:56.888797] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:39.263 [2024-11-26 12:53:56.890551] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:39.263 [2024-11-26 12:53:56.890744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:39.263 [2024-11-26 12:53:56.890759] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:39.263 [2024-11-26 12:53:56.890804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:39.263 [2024-11-26 12:53:56.890812] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:39.263 [2024-11-26 12:53:56.890850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:39.263 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.263 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:39.263 12:53:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:39.263 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:39.263 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.263 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.263 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.263 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.263 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.263 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.263 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.263 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.263 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.263 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.263 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.263 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.263 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.263 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.263 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.263 "name": 
"Existed_Raid", 00:10:39.263 "uuid": "96fceb23-7d73-488e-a440-6349b3fa776d", 00:10:39.263 "strip_size_kb": 0, 00:10:39.263 "state": "configuring", 00:10:39.263 "raid_level": "raid1", 00:10:39.263 "superblock": true, 00:10:39.263 "num_base_bdevs": 4, 00:10:39.263 "num_base_bdevs_discovered": 1, 00:10:39.263 "num_base_bdevs_operational": 4, 00:10:39.263 "base_bdevs_list": [ 00:10:39.263 { 00:10:39.263 "name": "BaseBdev1", 00:10:39.263 "uuid": "db06780e-c8e9-44e9-9ad0-4f31ddc80d50", 00:10:39.263 "is_configured": true, 00:10:39.263 "data_offset": 2048, 00:10:39.263 "data_size": 63488 00:10:39.263 }, 00:10:39.263 { 00:10:39.263 "name": "BaseBdev2", 00:10:39.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.263 "is_configured": false, 00:10:39.263 "data_offset": 0, 00:10:39.263 "data_size": 0 00:10:39.263 }, 00:10:39.263 { 00:10:39.263 "name": "BaseBdev3", 00:10:39.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.263 "is_configured": false, 00:10:39.263 "data_offset": 0, 00:10:39.263 "data_size": 0 00:10:39.263 }, 00:10:39.263 { 00:10:39.263 "name": "BaseBdev4", 00:10:39.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.263 "is_configured": false, 00:10:39.263 "data_offset": 0, 00:10:39.263 "data_size": 0 00:10:39.263 } 00:10:39.263 ] 00:10:39.263 }' 00:10:39.263 12:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.263 12:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.834 [2024-11-26 12:53:57.312314] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.834 
BaseBdev2 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.834 [ 00:10:39.834 { 00:10:39.834 "name": "BaseBdev2", 00:10:39.834 "aliases": [ 00:10:39.834 "1e01c423-1e82-4118-b895-77d85c43d944" 00:10:39.834 ], 00:10:39.834 "product_name": "Malloc disk", 00:10:39.834 "block_size": 512, 00:10:39.834 "num_blocks": 65536, 00:10:39.834 "uuid": "1e01c423-1e82-4118-b895-77d85c43d944", 00:10:39.834 "assigned_rate_limits": { 
00:10:39.834 "rw_ios_per_sec": 0, 00:10:39.834 "rw_mbytes_per_sec": 0, 00:10:39.834 "r_mbytes_per_sec": 0, 00:10:39.834 "w_mbytes_per_sec": 0 00:10:39.834 }, 00:10:39.834 "claimed": true, 00:10:39.834 "claim_type": "exclusive_write", 00:10:39.834 "zoned": false, 00:10:39.834 "supported_io_types": { 00:10:39.834 "read": true, 00:10:39.834 "write": true, 00:10:39.834 "unmap": true, 00:10:39.834 "flush": true, 00:10:39.834 "reset": true, 00:10:39.834 "nvme_admin": false, 00:10:39.834 "nvme_io": false, 00:10:39.834 "nvme_io_md": false, 00:10:39.834 "write_zeroes": true, 00:10:39.834 "zcopy": true, 00:10:39.834 "get_zone_info": false, 00:10:39.834 "zone_management": false, 00:10:39.834 "zone_append": false, 00:10:39.834 "compare": false, 00:10:39.834 "compare_and_write": false, 00:10:39.834 "abort": true, 00:10:39.834 "seek_hole": false, 00:10:39.834 "seek_data": false, 00:10:39.834 "copy": true, 00:10:39.834 "nvme_iov_md": false 00:10:39.834 }, 00:10:39.834 "memory_domains": [ 00:10:39.834 { 00:10:39.834 "dma_device_id": "system", 00:10:39.834 "dma_device_type": 1 00:10:39.834 }, 00:10:39.834 { 00:10:39.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.834 "dma_device_type": 2 00:10:39.834 } 00:10:39.834 ], 00:10:39.834 "driver_specific": {} 00:10:39.834 } 00:10:39.834 ] 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.834 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.834 "name": "Existed_Raid", 00:10:39.834 "uuid": "96fceb23-7d73-488e-a440-6349b3fa776d", 00:10:39.834 "strip_size_kb": 0, 00:10:39.834 "state": "configuring", 00:10:39.834 "raid_level": "raid1", 00:10:39.834 "superblock": true, 00:10:39.834 "num_base_bdevs": 4, 00:10:39.834 "num_base_bdevs_discovered": 2, 00:10:39.834 "num_base_bdevs_operational": 4, 00:10:39.834 
"base_bdevs_list": [ 00:10:39.834 { 00:10:39.834 "name": "BaseBdev1", 00:10:39.834 "uuid": "db06780e-c8e9-44e9-9ad0-4f31ddc80d50", 00:10:39.834 "is_configured": true, 00:10:39.834 "data_offset": 2048, 00:10:39.834 "data_size": 63488 00:10:39.834 }, 00:10:39.834 { 00:10:39.834 "name": "BaseBdev2", 00:10:39.834 "uuid": "1e01c423-1e82-4118-b895-77d85c43d944", 00:10:39.834 "is_configured": true, 00:10:39.834 "data_offset": 2048, 00:10:39.834 "data_size": 63488 00:10:39.834 }, 00:10:39.834 { 00:10:39.834 "name": "BaseBdev3", 00:10:39.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.835 "is_configured": false, 00:10:39.835 "data_offset": 0, 00:10:39.835 "data_size": 0 00:10:39.835 }, 00:10:39.835 { 00:10:39.835 "name": "BaseBdev4", 00:10:39.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.835 "is_configured": false, 00:10:39.835 "data_offset": 0, 00:10:39.835 "data_size": 0 00:10:39.835 } 00:10:39.835 ] 00:10:39.835 }' 00:10:39.835 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.835 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.404 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:40.404 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.404 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.404 [2024-11-26 12:53:57.794365] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:40.404 BaseBdev3 00:10:40.404 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.404 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:40.404 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev3 00:10:40.404 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:40.404 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:40.404 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:40.404 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:40.404 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:40.404 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.404 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.404 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.404 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:40.404 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.404 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.404 [ 00:10:40.404 { 00:10:40.404 "name": "BaseBdev3", 00:10:40.404 "aliases": [ 00:10:40.404 "d7bf13db-ef8d-4c04-9507-dd3d58bfdde2" 00:10:40.404 ], 00:10:40.404 "product_name": "Malloc disk", 00:10:40.404 "block_size": 512, 00:10:40.404 "num_blocks": 65536, 00:10:40.404 "uuid": "d7bf13db-ef8d-4c04-9507-dd3d58bfdde2", 00:10:40.404 "assigned_rate_limits": { 00:10:40.404 "rw_ios_per_sec": 0, 00:10:40.404 "rw_mbytes_per_sec": 0, 00:10:40.404 "r_mbytes_per_sec": 0, 00:10:40.404 "w_mbytes_per_sec": 0 00:10:40.404 }, 00:10:40.404 "claimed": true, 00:10:40.404 "claim_type": "exclusive_write", 00:10:40.404 "zoned": false, 00:10:40.404 "supported_io_types": { 00:10:40.404 "read": true, 00:10:40.404 
"write": true, 00:10:40.404 "unmap": true, 00:10:40.404 "flush": true, 00:10:40.404 "reset": true, 00:10:40.404 "nvme_admin": false, 00:10:40.404 "nvme_io": false, 00:10:40.404 "nvme_io_md": false, 00:10:40.404 "write_zeroes": true, 00:10:40.404 "zcopy": true, 00:10:40.404 "get_zone_info": false, 00:10:40.404 "zone_management": false, 00:10:40.404 "zone_append": false, 00:10:40.404 "compare": false, 00:10:40.404 "compare_and_write": false, 00:10:40.404 "abort": true, 00:10:40.404 "seek_hole": false, 00:10:40.404 "seek_data": false, 00:10:40.404 "copy": true, 00:10:40.404 "nvme_iov_md": false 00:10:40.404 }, 00:10:40.404 "memory_domains": [ 00:10:40.404 { 00:10:40.404 "dma_device_id": "system", 00:10:40.404 "dma_device_type": 1 00:10:40.404 }, 00:10:40.404 { 00:10:40.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.404 "dma_device_type": 2 00:10:40.405 } 00:10:40.405 ], 00:10:40.405 "driver_specific": {} 00:10:40.405 } 00:10:40.405 ] 00:10:40.405 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.405 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:40.405 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:40.405 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:40.405 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:40.405 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.405 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.405 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.405 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:40.405 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.405 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.405 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.405 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.405 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.405 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.405 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.405 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.405 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.405 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.405 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.405 "name": "Existed_Raid", 00:10:40.405 "uuid": "96fceb23-7d73-488e-a440-6349b3fa776d", 00:10:40.405 "strip_size_kb": 0, 00:10:40.405 "state": "configuring", 00:10:40.405 "raid_level": "raid1", 00:10:40.405 "superblock": true, 00:10:40.405 "num_base_bdevs": 4, 00:10:40.405 "num_base_bdevs_discovered": 3, 00:10:40.405 "num_base_bdevs_operational": 4, 00:10:40.405 "base_bdevs_list": [ 00:10:40.405 { 00:10:40.405 "name": "BaseBdev1", 00:10:40.405 "uuid": "db06780e-c8e9-44e9-9ad0-4f31ddc80d50", 00:10:40.405 "is_configured": true, 00:10:40.405 "data_offset": 2048, 00:10:40.405 "data_size": 63488 00:10:40.405 }, 00:10:40.405 { 00:10:40.405 "name": "BaseBdev2", 00:10:40.405 "uuid": 
"1e01c423-1e82-4118-b895-77d85c43d944", 00:10:40.405 "is_configured": true, 00:10:40.405 "data_offset": 2048, 00:10:40.405 "data_size": 63488 00:10:40.405 }, 00:10:40.405 { 00:10:40.405 "name": "BaseBdev3", 00:10:40.405 "uuid": "d7bf13db-ef8d-4c04-9507-dd3d58bfdde2", 00:10:40.405 "is_configured": true, 00:10:40.405 "data_offset": 2048, 00:10:40.405 "data_size": 63488 00:10:40.405 }, 00:10:40.405 { 00:10:40.405 "name": "BaseBdev4", 00:10:40.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.405 "is_configured": false, 00:10:40.405 "data_offset": 0, 00:10:40.405 "data_size": 0 00:10:40.405 } 00:10:40.405 ] 00:10:40.405 }' 00:10:40.405 12:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.405 12:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.665 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:40.665 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.665 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.665 [2024-11-26 12:53:58.296456] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:40.665 [2024-11-26 12:53:58.296748] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:40.665 [2024-11-26 12:53:58.296798] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:40.665 BaseBdev4 00:10:40.665 [2024-11-26 12:53:58.297126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:40.665 [2024-11-26 12:53:58.297322] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:40.665 [2024-11-26 12:53:58.297375] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:10:40.665 [2024-11-26 12:53:58.297544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.665 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.665 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:40.665 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:40.665 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:40.665 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:40.665 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:40.665 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:40.665 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:40.665 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.665 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.665 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.665 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:40.665 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.665 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.665 [ 00:10:40.665 { 00:10:40.665 "name": "BaseBdev4", 00:10:40.665 "aliases": [ 00:10:40.665 "0139925c-cdd8-4942-bd9a-b13e2975545b" 00:10:40.665 ], 00:10:40.665 "product_name": "Malloc disk", 00:10:40.665 "block_size": 512, 00:10:40.665 
"num_blocks": 65536, 00:10:40.665 "uuid": "0139925c-cdd8-4942-bd9a-b13e2975545b", 00:10:40.665 "assigned_rate_limits": { 00:10:40.665 "rw_ios_per_sec": 0, 00:10:40.665 "rw_mbytes_per_sec": 0, 00:10:40.665 "r_mbytes_per_sec": 0, 00:10:40.665 "w_mbytes_per_sec": 0 00:10:40.665 }, 00:10:40.665 "claimed": true, 00:10:40.665 "claim_type": "exclusive_write", 00:10:40.665 "zoned": false, 00:10:40.665 "supported_io_types": { 00:10:40.665 "read": true, 00:10:40.665 "write": true, 00:10:40.665 "unmap": true, 00:10:40.665 "flush": true, 00:10:40.665 "reset": true, 00:10:40.665 "nvme_admin": false, 00:10:40.665 "nvme_io": false, 00:10:40.665 "nvme_io_md": false, 00:10:40.665 "write_zeroes": true, 00:10:40.665 "zcopy": true, 00:10:40.665 "get_zone_info": false, 00:10:40.665 "zone_management": false, 00:10:40.665 "zone_append": false, 00:10:40.665 "compare": false, 00:10:40.665 "compare_and_write": false, 00:10:40.665 "abort": true, 00:10:40.665 "seek_hole": false, 00:10:40.665 "seek_data": false, 00:10:40.665 "copy": true, 00:10:40.665 "nvme_iov_md": false 00:10:40.665 }, 00:10:40.665 "memory_domains": [ 00:10:40.666 { 00:10:40.666 "dma_device_id": "system", 00:10:40.666 "dma_device_type": 1 00:10:40.666 }, 00:10:40.666 { 00:10:40.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.666 "dma_device_type": 2 00:10:40.666 } 00:10:40.666 ], 00:10:40.666 "driver_specific": {} 00:10:40.666 } 00:10:40.666 ] 00:10:40.666 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.666 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:40.666 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:40.666 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:40.666 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:10:40.666 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.666 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.666 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.666 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.666 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.666 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.666 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.666 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.666 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.666 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.666 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.666 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.666 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.925 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.925 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.925 "name": "Existed_Raid", 00:10:40.925 "uuid": "96fceb23-7d73-488e-a440-6349b3fa776d", 00:10:40.925 "strip_size_kb": 0, 00:10:40.925 "state": "online", 00:10:40.925 "raid_level": "raid1", 00:10:40.925 "superblock": true, 00:10:40.925 "num_base_bdevs": 4, 
00:10:40.925 "num_base_bdevs_discovered": 4, 00:10:40.925 "num_base_bdevs_operational": 4, 00:10:40.925 "base_bdevs_list": [ 00:10:40.925 { 00:10:40.925 "name": "BaseBdev1", 00:10:40.925 "uuid": "db06780e-c8e9-44e9-9ad0-4f31ddc80d50", 00:10:40.925 "is_configured": true, 00:10:40.925 "data_offset": 2048, 00:10:40.925 "data_size": 63488 00:10:40.925 }, 00:10:40.925 { 00:10:40.925 "name": "BaseBdev2", 00:10:40.925 "uuid": "1e01c423-1e82-4118-b895-77d85c43d944", 00:10:40.925 "is_configured": true, 00:10:40.925 "data_offset": 2048, 00:10:40.925 "data_size": 63488 00:10:40.925 }, 00:10:40.925 { 00:10:40.926 "name": "BaseBdev3", 00:10:40.926 "uuid": "d7bf13db-ef8d-4c04-9507-dd3d58bfdde2", 00:10:40.926 "is_configured": true, 00:10:40.926 "data_offset": 2048, 00:10:40.926 "data_size": 63488 00:10:40.926 }, 00:10:40.926 { 00:10:40.926 "name": "BaseBdev4", 00:10:40.926 "uuid": "0139925c-cdd8-4942-bd9a-b13e2975545b", 00:10:40.926 "is_configured": true, 00:10:40.926 "data_offset": 2048, 00:10:40.926 "data_size": 63488 00:10:40.926 } 00:10:40.926 ] 00:10:40.926 }' 00:10:40.926 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.926 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.186 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:41.186 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:41.186 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:41.186 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:41.186 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:41.186 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:41.186 
12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:41.186 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.186 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.186 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:41.186 [2024-11-26 12:53:58.760018] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:41.186 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.186 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:41.186 "name": "Existed_Raid", 00:10:41.186 "aliases": [ 00:10:41.186 "96fceb23-7d73-488e-a440-6349b3fa776d" 00:10:41.186 ], 00:10:41.186 "product_name": "Raid Volume", 00:10:41.186 "block_size": 512, 00:10:41.186 "num_blocks": 63488, 00:10:41.186 "uuid": "96fceb23-7d73-488e-a440-6349b3fa776d", 00:10:41.186 "assigned_rate_limits": { 00:10:41.186 "rw_ios_per_sec": 0, 00:10:41.186 "rw_mbytes_per_sec": 0, 00:10:41.186 "r_mbytes_per_sec": 0, 00:10:41.186 "w_mbytes_per_sec": 0 00:10:41.186 }, 00:10:41.186 "claimed": false, 00:10:41.186 "zoned": false, 00:10:41.186 "supported_io_types": { 00:10:41.186 "read": true, 00:10:41.186 "write": true, 00:10:41.186 "unmap": false, 00:10:41.186 "flush": false, 00:10:41.186 "reset": true, 00:10:41.186 "nvme_admin": false, 00:10:41.186 "nvme_io": false, 00:10:41.186 "nvme_io_md": false, 00:10:41.186 "write_zeroes": true, 00:10:41.186 "zcopy": false, 00:10:41.186 "get_zone_info": false, 00:10:41.186 "zone_management": false, 00:10:41.186 "zone_append": false, 00:10:41.186 "compare": false, 00:10:41.186 "compare_and_write": false, 00:10:41.186 "abort": false, 00:10:41.186 "seek_hole": false, 00:10:41.186 "seek_data": false, 00:10:41.186 "copy": false, 00:10:41.186 
"nvme_iov_md": false 00:10:41.186 }, 00:10:41.186 "memory_domains": [ 00:10:41.186 { 00:10:41.186 "dma_device_id": "system", 00:10:41.186 "dma_device_type": 1 00:10:41.186 }, 00:10:41.186 { 00:10:41.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.186 "dma_device_type": 2 00:10:41.186 }, 00:10:41.186 { 00:10:41.186 "dma_device_id": "system", 00:10:41.186 "dma_device_type": 1 00:10:41.186 }, 00:10:41.186 { 00:10:41.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.186 "dma_device_type": 2 00:10:41.186 }, 00:10:41.186 { 00:10:41.186 "dma_device_id": "system", 00:10:41.186 "dma_device_type": 1 00:10:41.186 }, 00:10:41.186 { 00:10:41.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.186 "dma_device_type": 2 00:10:41.186 }, 00:10:41.186 { 00:10:41.186 "dma_device_id": "system", 00:10:41.186 "dma_device_type": 1 00:10:41.186 }, 00:10:41.186 { 00:10:41.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.186 "dma_device_type": 2 00:10:41.186 } 00:10:41.186 ], 00:10:41.186 "driver_specific": { 00:10:41.186 "raid": { 00:10:41.186 "uuid": "96fceb23-7d73-488e-a440-6349b3fa776d", 00:10:41.186 "strip_size_kb": 0, 00:10:41.186 "state": "online", 00:10:41.186 "raid_level": "raid1", 00:10:41.186 "superblock": true, 00:10:41.186 "num_base_bdevs": 4, 00:10:41.186 "num_base_bdevs_discovered": 4, 00:10:41.186 "num_base_bdevs_operational": 4, 00:10:41.186 "base_bdevs_list": [ 00:10:41.186 { 00:10:41.186 "name": "BaseBdev1", 00:10:41.186 "uuid": "db06780e-c8e9-44e9-9ad0-4f31ddc80d50", 00:10:41.186 "is_configured": true, 00:10:41.186 "data_offset": 2048, 00:10:41.186 "data_size": 63488 00:10:41.186 }, 00:10:41.186 { 00:10:41.186 "name": "BaseBdev2", 00:10:41.186 "uuid": "1e01c423-1e82-4118-b895-77d85c43d944", 00:10:41.186 "is_configured": true, 00:10:41.186 "data_offset": 2048, 00:10:41.186 "data_size": 63488 00:10:41.186 }, 00:10:41.186 { 00:10:41.186 "name": "BaseBdev3", 00:10:41.186 "uuid": "d7bf13db-ef8d-4c04-9507-dd3d58bfdde2", 00:10:41.186 "is_configured": true, 
00:10:41.186 "data_offset": 2048, 00:10:41.186 "data_size": 63488 00:10:41.186 }, 00:10:41.186 { 00:10:41.186 "name": "BaseBdev4", 00:10:41.186 "uuid": "0139925c-cdd8-4942-bd9a-b13e2975545b", 00:10:41.186 "is_configured": true, 00:10:41.186 "data_offset": 2048, 00:10:41.186 "data_size": 63488 00:10:41.186 } 00:10:41.186 ] 00:10:41.186 } 00:10:41.187 } 00:10:41.187 }' 00:10:41.187 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:41.187 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:41.187 BaseBdev2 00:10:41.187 BaseBdev3 00:10:41.187 BaseBdev4' 00:10:41.187 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.187 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:41.187 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.447 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:41.447 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.447 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.447 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.447 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.447 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.447 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.447 12:53:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.447 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:41.447 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.447 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.447 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.447 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.447 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.447 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.447 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.447 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.447 12:53:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:41.447 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.447 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.447 12:53:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.447 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.447 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.447 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:41.447 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:41.447 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.448 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.448 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.448 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.448 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.448 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.448 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:41.448 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.448 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.448 [2024-11-26 12:53:59.071258] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:41.448 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.448 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:41.448 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:41.448 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:41.448 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:41.448 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:41.448 12:53:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:41.448 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.448 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:41.448 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:41.448 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:41.448 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:41.448 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.448 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.448 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.448 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.448 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.448 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.448 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.448 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.448 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.715 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.715 "name": "Existed_Raid", 00:10:41.715 "uuid": "96fceb23-7d73-488e-a440-6349b3fa776d", 00:10:41.715 "strip_size_kb": 0, 00:10:41.715 
"state": "online", 00:10:41.715 "raid_level": "raid1", 00:10:41.715 "superblock": true, 00:10:41.715 "num_base_bdevs": 4, 00:10:41.715 "num_base_bdevs_discovered": 3, 00:10:41.715 "num_base_bdevs_operational": 3, 00:10:41.715 "base_bdevs_list": [ 00:10:41.715 { 00:10:41.715 "name": null, 00:10:41.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.715 "is_configured": false, 00:10:41.715 "data_offset": 0, 00:10:41.715 "data_size": 63488 00:10:41.715 }, 00:10:41.715 { 00:10:41.715 "name": "BaseBdev2", 00:10:41.715 "uuid": "1e01c423-1e82-4118-b895-77d85c43d944", 00:10:41.715 "is_configured": true, 00:10:41.715 "data_offset": 2048, 00:10:41.715 "data_size": 63488 00:10:41.715 }, 00:10:41.715 { 00:10:41.715 "name": "BaseBdev3", 00:10:41.715 "uuid": "d7bf13db-ef8d-4c04-9507-dd3d58bfdde2", 00:10:41.715 "is_configured": true, 00:10:41.715 "data_offset": 2048, 00:10:41.715 "data_size": 63488 00:10:41.715 }, 00:10:41.715 { 00:10:41.715 "name": "BaseBdev4", 00:10:41.715 "uuid": "0139925c-cdd8-4942-bd9a-b13e2975545b", 00:10:41.715 "is_configured": true, 00:10:41.715 "data_offset": 2048, 00:10:41.715 "data_size": 63488 00:10:41.715 } 00:10:41.715 ] 00:10:41.715 }' 00:10:41.715 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.715 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.978 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:41.978 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.978 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.978 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.978 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.978 12:53:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:41.978 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.978 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:41.978 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:41.978 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:41.978 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.978 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.978 [2024-11-26 12:53:59.577709] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:41.978 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.978 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:41.978 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.978 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.978 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:41.978 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.978 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.978 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.978 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:41.978 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:10:41.978 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:41.978 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.978 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.978 [2024-11-26 12:53:59.648534] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:42.239 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.239 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:42.239 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:42.239 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:42.239 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.239 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.239 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.239 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.239 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:42.239 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:42.239 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:42.239 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.239 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.239 [2024-11-26 12:53:59.715558] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:42.239 [2024-11-26 12:53:59.715664] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:42.239 [2024-11-26 12:53:59.726681] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:42.239 [2024-11-26 12:53:59.726783] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:42.239 [2024-11-26 12:53:59.726834] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:42.239 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.239 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:42.239 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:42.239 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.239 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.239 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.239 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:42.239 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.239 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:42.239 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:42.239 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:42.239 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:42.239 12:53:59 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:42.239 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:42.239 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.240 BaseBdev2 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:10:42.240 [ 00:10:42.240 { 00:10:42.240 "name": "BaseBdev2", 00:10:42.240 "aliases": [ 00:10:42.240 "4113112d-a5d8-4c7d-9bb4-8936ce128518" 00:10:42.240 ], 00:10:42.240 "product_name": "Malloc disk", 00:10:42.240 "block_size": 512, 00:10:42.240 "num_blocks": 65536, 00:10:42.240 "uuid": "4113112d-a5d8-4c7d-9bb4-8936ce128518", 00:10:42.240 "assigned_rate_limits": { 00:10:42.240 "rw_ios_per_sec": 0, 00:10:42.240 "rw_mbytes_per_sec": 0, 00:10:42.240 "r_mbytes_per_sec": 0, 00:10:42.240 "w_mbytes_per_sec": 0 00:10:42.240 }, 00:10:42.240 "claimed": false, 00:10:42.240 "zoned": false, 00:10:42.240 "supported_io_types": { 00:10:42.240 "read": true, 00:10:42.240 "write": true, 00:10:42.240 "unmap": true, 00:10:42.240 "flush": true, 00:10:42.240 "reset": true, 00:10:42.240 "nvme_admin": false, 00:10:42.240 "nvme_io": false, 00:10:42.240 "nvme_io_md": false, 00:10:42.240 "write_zeroes": true, 00:10:42.240 "zcopy": true, 00:10:42.240 "get_zone_info": false, 00:10:42.240 "zone_management": false, 00:10:42.240 "zone_append": false, 00:10:42.240 "compare": false, 00:10:42.240 "compare_and_write": false, 00:10:42.240 "abort": true, 00:10:42.240 "seek_hole": false, 00:10:42.240 "seek_data": false, 00:10:42.240 "copy": true, 00:10:42.240 "nvme_iov_md": false 00:10:42.240 }, 00:10:42.240 "memory_domains": [ 00:10:42.240 { 00:10:42.240 "dma_device_id": "system", 00:10:42.240 "dma_device_type": 1 00:10:42.240 }, 00:10:42.240 { 00:10:42.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.240 "dma_device_type": 2 00:10:42.240 } 00:10:42.240 ], 00:10:42.240 "driver_specific": {} 00:10:42.240 } 00:10:42.240 ] 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:42.240 12:53:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.240 BaseBdev3 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.240 12:53:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.240 [ 00:10:42.240 { 00:10:42.240 "name": "BaseBdev3", 00:10:42.240 "aliases": [ 00:10:42.240 "aa934f6f-cc84-4e3d-b285-35c400657242" 00:10:42.240 ], 00:10:42.240 "product_name": "Malloc disk", 00:10:42.240 "block_size": 512, 00:10:42.240 "num_blocks": 65536, 00:10:42.240 "uuid": "aa934f6f-cc84-4e3d-b285-35c400657242", 00:10:42.240 "assigned_rate_limits": { 00:10:42.240 "rw_ios_per_sec": 0, 00:10:42.240 "rw_mbytes_per_sec": 0, 00:10:42.240 "r_mbytes_per_sec": 0, 00:10:42.240 "w_mbytes_per_sec": 0 00:10:42.240 }, 00:10:42.240 "claimed": false, 00:10:42.240 "zoned": false, 00:10:42.240 "supported_io_types": { 00:10:42.240 "read": true, 00:10:42.240 "write": true, 00:10:42.240 "unmap": true, 00:10:42.240 "flush": true, 00:10:42.240 "reset": true, 00:10:42.240 "nvme_admin": false, 00:10:42.240 "nvme_io": false, 00:10:42.240 "nvme_io_md": false, 00:10:42.240 "write_zeroes": true, 00:10:42.240 "zcopy": true, 00:10:42.240 "get_zone_info": false, 00:10:42.240 "zone_management": false, 00:10:42.240 "zone_append": false, 00:10:42.240 "compare": false, 00:10:42.240 "compare_and_write": false, 00:10:42.240 "abort": true, 00:10:42.240 "seek_hole": false, 00:10:42.240 "seek_data": false, 00:10:42.240 "copy": true, 00:10:42.240 "nvme_iov_md": false 00:10:42.240 }, 00:10:42.240 "memory_domains": [ 00:10:42.240 { 00:10:42.240 "dma_device_id": "system", 00:10:42.240 "dma_device_type": 1 00:10:42.240 }, 00:10:42.240 { 00:10:42.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.240 "dma_device_type": 2 00:10:42.240 } 00:10:42.240 ], 00:10:42.240 "driver_specific": {} 00:10:42.240 } 00:10:42.240 ] 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.240 BaseBdev4 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.240 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.501 [ 00:10:42.501 { 00:10:42.501 "name": "BaseBdev4", 00:10:42.501 "aliases": [ 00:10:42.501 "2a9f8408-7c42-4847-8e4a-54232cf37942" 00:10:42.501 ], 00:10:42.501 "product_name": "Malloc disk", 00:10:42.501 "block_size": 512, 00:10:42.501 "num_blocks": 65536, 00:10:42.501 "uuid": "2a9f8408-7c42-4847-8e4a-54232cf37942", 00:10:42.501 "assigned_rate_limits": { 00:10:42.501 "rw_ios_per_sec": 0, 00:10:42.501 "rw_mbytes_per_sec": 0, 00:10:42.501 "r_mbytes_per_sec": 0, 00:10:42.501 "w_mbytes_per_sec": 0 00:10:42.501 }, 00:10:42.501 "claimed": false, 00:10:42.501 "zoned": false, 00:10:42.501 "supported_io_types": { 00:10:42.501 "read": true, 00:10:42.501 "write": true, 00:10:42.501 "unmap": true, 00:10:42.501 "flush": true, 00:10:42.501 "reset": true, 00:10:42.501 "nvme_admin": false, 00:10:42.501 "nvme_io": false, 00:10:42.501 "nvme_io_md": false, 00:10:42.501 "write_zeroes": true, 00:10:42.501 "zcopy": true, 00:10:42.501 "get_zone_info": false, 00:10:42.501 "zone_management": false, 00:10:42.501 "zone_append": false, 00:10:42.501 "compare": false, 00:10:42.501 "compare_and_write": false, 00:10:42.501 "abort": true, 00:10:42.501 "seek_hole": false, 00:10:42.501 "seek_data": false, 00:10:42.501 "copy": true, 00:10:42.501 "nvme_iov_md": false 00:10:42.501 }, 00:10:42.501 "memory_domains": [ 00:10:42.501 { 00:10:42.501 "dma_device_id": "system", 00:10:42.501 "dma_device_type": 1 00:10:42.501 }, 00:10:42.501 { 00:10:42.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.501 "dma_device_type": 2 00:10:42.501 } 00:10:42.501 ], 00:10:42.501 "driver_specific": {} 00:10:42.501 } 00:10:42.501 ] 00:10:42.501 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.501 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:10:42.501 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:42.501 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:42.501 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:42.501 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.501 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.501 [2024-11-26 12:53:59.942340] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:42.501 [2024-11-26 12:53:59.942764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:42.501 [2024-11-26 12:53:59.942795] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:42.501 [2024-11-26 12:53:59.944510] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:42.501 [2024-11-26 12:53:59.944554] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:42.501 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.501 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:42.501 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.501 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.501 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:42.501 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:42.501 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.501 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.501 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.501 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.501 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.501 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.501 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.501 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.501 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.501 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.501 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.501 "name": "Existed_Raid", 00:10:42.501 "uuid": "6b1c0fd7-97ce-4d86-a8b4-868b91dd4c30", 00:10:42.501 "strip_size_kb": 0, 00:10:42.501 "state": "configuring", 00:10:42.501 "raid_level": "raid1", 00:10:42.501 "superblock": true, 00:10:42.501 "num_base_bdevs": 4, 00:10:42.501 "num_base_bdevs_discovered": 3, 00:10:42.501 "num_base_bdevs_operational": 4, 00:10:42.501 "base_bdevs_list": [ 00:10:42.501 { 00:10:42.501 "name": "BaseBdev1", 00:10:42.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.501 "is_configured": false, 00:10:42.501 "data_offset": 0, 00:10:42.501 "data_size": 0 00:10:42.501 }, 00:10:42.501 { 00:10:42.501 "name": "BaseBdev2", 00:10:42.501 "uuid": "4113112d-a5d8-4c7d-9bb4-8936ce128518", 
00:10:42.501 "is_configured": true, 00:10:42.501 "data_offset": 2048, 00:10:42.501 "data_size": 63488 00:10:42.501 }, 00:10:42.501 { 00:10:42.501 "name": "BaseBdev3", 00:10:42.501 "uuid": "aa934f6f-cc84-4e3d-b285-35c400657242", 00:10:42.501 "is_configured": true, 00:10:42.501 "data_offset": 2048, 00:10:42.501 "data_size": 63488 00:10:42.501 }, 00:10:42.501 { 00:10:42.501 "name": "BaseBdev4", 00:10:42.501 "uuid": "2a9f8408-7c42-4847-8e4a-54232cf37942", 00:10:42.501 "is_configured": true, 00:10:42.501 "data_offset": 2048, 00:10:42.501 "data_size": 63488 00:10:42.501 } 00:10:42.501 ] 00:10:42.501 }' 00:10:42.501 12:53:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.501 12:53:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.762 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:42.762 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.762 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.762 [2024-11-26 12:54:00.417490] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:42.762 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.762 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:42.762 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.762 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.762 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:42.762 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:42.762 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.762 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.762 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.762 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.762 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.762 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.762 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.762 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.762 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.022 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.022 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.022 "name": "Existed_Raid", 00:10:43.022 "uuid": "6b1c0fd7-97ce-4d86-a8b4-868b91dd4c30", 00:10:43.022 "strip_size_kb": 0, 00:10:43.022 "state": "configuring", 00:10:43.022 "raid_level": "raid1", 00:10:43.022 "superblock": true, 00:10:43.022 "num_base_bdevs": 4, 00:10:43.022 "num_base_bdevs_discovered": 2, 00:10:43.022 "num_base_bdevs_operational": 4, 00:10:43.022 "base_bdevs_list": [ 00:10:43.022 { 00:10:43.022 "name": "BaseBdev1", 00:10:43.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.022 "is_configured": false, 00:10:43.022 "data_offset": 0, 00:10:43.022 "data_size": 0 00:10:43.022 }, 00:10:43.022 { 00:10:43.022 "name": null, 00:10:43.022 "uuid": "4113112d-a5d8-4c7d-9bb4-8936ce128518", 00:10:43.022 
"is_configured": false, 00:10:43.022 "data_offset": 0, 00:10:43.022 "data_size": 63488 00:10:43.022 }, 00:10:43.022 { 00:10:43.022 "name": "BaseBdev3", 00:10:43.022 "uuid": "aa934f6f-cc84-4e3d-b285-35c400657242", 00:10:43.022 "is_configured": true, 00:10:43.022 "data_offset": 2048, 00:10:43.022 "data_size": 63488 00:10:43.022 }, 00:10:43.022 { 00:10:43.022 "name": "BaseBdev4", 00:10:43.022 "uuid": "2a9f8408-7c42-4847-8e4a-54232cf37942", 00:10:43.022 "is_configured": true, 00:10:43.022 "data_offset": 2048, 00:10:43.022 "data_size": 63488 00:10:43.022 } 00:10:43.022 ] 00:10:43.022 }' 00:10:43.022 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.022 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.283 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.283 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:43.283 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.283 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.283 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.283 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:43.283 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:43.283 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.283 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.283 [2024-11-26 12:54:00.879690] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.283 BaseBdev1 
00:10:43.283 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.283 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:43.283 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:43.283 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:43.283 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:43.283 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:43.283 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:43.283 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:43.283 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.283 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.283 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.283 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:43.283 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.283 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.283 [ 00:10:43.283 { 00:10:43.283 "name": "BaseBdev1", 00:10:43.283 "aliases": [ 00:10:43.283 "b80303ad-b84c-426f-88c9-7eea7718171f" 00:10:43.283 ], 00:10:43.283 "product_name": "Malloc disk", 00:10:43.283 "block_size": 512, 00:10:43.283 "num_blocks": 65536, 00:10:43.283 "uuid": "b80303ad-b84c-426f-88c9-7eea7718171f", 00:10:43.283 "assigned_rate_limits": { 00:10:43.283 
"rw_ios_per_sec": 0, 00:10:43.283 "rw_mbytes_per_sec": 0, 00:10:43.283 "r_mbytes_per_sec": 0, 00:10:43.283 "w_mbytes_per_sec": 0 00:10:43.283 }, 00:10:43.283 "claimed": true, 00:10:43.283 "claim_type": "exclusive_write", 00:10:43.283 "zoned": false, 00:10:43.283 "supported_io_types": { 00:10:43.283 "read": true, 00:10:43.283 "write": true, 00:10:43.283 "unmap": true, 00:10:43.283 "flush": true, 00:10:43.283 "reset": true, 00:10:43.283 "nvme_admin": false, 00:10:43.283 "nvme_io": false, 00:10:43.283 "nvme_io_md": false, 00:10:43.283 "write_zeroes": true, 00:10:43.283 "zcopy": true, 00:10:43.283 "get_zone_info": false, 00:10:43.283 "zone_management": false, 00:10:43.283 "zone_append": false, 00:10:43.283 "compare": false, 00:10:43.283 "compare_and_write": false, 00:10:43.283 "abort": true, 00:10:43.283 "seek_hole": false, 00:10:43.283 "seek_data": false, 00:10:43.283 "copy": true, 00:10:43.283 "nvme_iov_md": false 00:10:43.283 }, 00:10:43.283 "memory_domains": [ 00:10:43.283 { 00:10:43.283 "dma_device_id": "system", 00:10:43.283 "dma_device_type": 1 00:10:43.283 }, 00:10:43.284 { 00:10:43.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.284 "dma_device_type": 2 00:10:43.284 } 00:10:43.284 ], 00:10:43.284 "driver_specific": {} 00:10:43.284 } 00:10:43.284 ] 00:10:43.284 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.284 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:43.284 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:43.284 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.284 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.284 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:43.284 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.284 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.284 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.284 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.284 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.284 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.284 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.284 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.284 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.284 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.284 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.544 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.544 "name": "Existed_Raid", 00:10:43.544 "uuid": "6b1c0fd7-97ce-4d86-a8b4-868b91dd4c30", 00:10:43.544 "strip_size_kb": 0, 00:10:43.545 "state": "configuring", 00:10:43.545 "raid_level": "raid1", 00:10:43.545 "superblock": true, 00:10:43.545 "num_base_bdevs": 4, 00:10:43.545 "num_base_bdevs_discovered": 3, 00:10:43.545 "num_base_bdevs_operational": 4, 00:10:43.545 "base_bdevs_list": [ 00:10:43.545 { 00:10:43.545 "name": "BaseBdev1", 00:10:43.545 "uuid": "b80303ad-b84c-426f-88c9-7eea7718171f", 00:10:43.545 "is_configured": true, 00:10:43.545 "data_offset": 2048, 00:10:43.545 "data_size": 63488 
00:10:43.545 }, 00:10:43.545 { 00:10:43.545 "name": null, 00:10:43.545 "uuid": "4113112d-a5d8-4c7d-9bb4-8936ce128518", 00:10:43.545 "is_configured": false, 00:10:43.545 "data_offset": 0, 00:10:43.545 "data_size": 63488 00:10:43.545 }, 00:10:43.545 { 00:10:43.545 "name": "BaseBdev3", 00:10:43.545 "uuid": "aa934f6f-cc84-4e3d-b285-35c400657242", 00:10:43.545 "is_configured": true, 00:10:43.545 "data_offset": 2048, 00:10:43.545 "data_size": 63488 00:10:43.545 }, 00:10:43.545 { 00:10:43.545 "name": "BaseBdev4", 00:10:43.545 "uuid": "2a9f8408-7c42-4847-8e4a-54232cf37942", 00:10:43.545 "is_configured": true, 00:10:43.545 "data_offset": 2048, 00:10:43.545 "data_size": 63488 00:10:43.545 } 00:10:43.545 ] 00:10:43.545 }' 00:10:43.545 12:54:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.545 12:54:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.806 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:43.806 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.806 12:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.806 12:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.806 12:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.806 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:43.806 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:43.806 12:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.806 12:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.806 
[2024-11-26 12:54:01.358961] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:43.806 12:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.806 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:43.806 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.806 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.806 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.806 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.806 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.806 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.806 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.806 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.806 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.806 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.806 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.806 12:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.806 12:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.806 12:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.806 12:54:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.806 "name": "Existed_Raid", 00:10:43.806 "uuid": "6b1c0fd7-97ce-4d86-a8b4-868b91dd4c30", 00:10:43.806 "strip_size_kb": 0, 00:10:43.806 "state": "configuring", 00:10:43.806 "raid_level": "raid1", 00:10:43.806 "superblock": true, 00:10:43.806 "num_base_bdevs": 4, 00:10:43.806 "num_base_bdevs_discovered": 2, 00:10:43.806 "num_base_bdevs_operational": 4, 00:10:43.806 "base_bdevs_list": [ 00:10:43.806 { 00:10:43.806 "name": "BaseBdev1", 00:10:43.806 "uuid": "b80303ad-b84c-426f-88c9-7eea7718171f", 00:10:43.806 "is_configured": true, 00:10:43.806 "data_offset": 2048, 00:10:43.806 "data_size": 63488 00:10:43.806 }, 00:10:43.806 { 00:10:43.806 "name": null, 00:10:43.806 "uuid": "4113112d-a5d8-4c7d-9bb4-8936ce128518", 00:10:43.806 "is_configured": false, 00:10:43.806 "data_offset": 0, 00:10:43.806 "data_size": 63488 00:10:43.806 }, 00:10:43.806 { 00:10:43.806 "name": null, 00:10:43.806 "uuid": "aa934f6f-cc84-4e3d-b285-35c400657242", 00:10:43.806 "is_configured": false, 00:10:43.806 "data_offset": 0, 00:10:43.806 "data_size": 63488 00:10:43.806 }, 00:10:43.806 { 00:10:43.806 "name": "BaseBdev4", 00:10:43.806 "uuid": "2a9f8408-7c42-4847-8e4a-54232cf37942", 00:10:43.806 "is_configured": true, 00:10:43.806 "data_offset": 2048, 00:10:43.806 "data_size": 63488 00:10:43.806 } 00:10:43.806 ] 00:10:43.806 }' 00:10:43.806 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.806 12:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.376 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.376 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:44.376 12:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.376 
12:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.376 12:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.376 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:44.376 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:44.376 12:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.376 12:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.376 [2024-11-26 12:54:01.786296] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.376 12:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.376 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:44.376 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.376 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.376 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.376 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.376 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.376 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.376 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.376 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:44.376 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.376 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.376 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.376 12:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.376 12:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.376 12:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.376 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.376 "name": "Existed_Raid", 00:10:44.376 "uuid": "6b1c0fd7-97ce-4d86-a8b4-868b91dd4c30", 00:10:44.376 "strip_size_kb": 0, 00:10:44.376 "state": "configuring", 00:10:44.376 "raid_level": "raid1", 00:10:44.376 "superblock": true, 00:10:44.376 "num_base_bdevs": 4, 00:10:44.376 "num_base_bdevs_discovered": 3, 00:10:44.376 "num_base_bdevs_operational": 4, 00:10:44.376 "base_bdevs_list": [ 00:10:44.376 { 00:10:44.376 "name": "BaseBdev1", 00:10:44.376 "uuid": "b80303ad-b84c-426f-88c9-7eea7718171f", 00:10:44.376 "is_configured": true, 00:10:44.376 "data_offset": 2048, 00:10:44.376 "data_size": 63488 00:10:44.376 }, 00:10:44.376 { 00:10:44.376 "name": null, 00:10:44.376 "uuid": "4113112d-a5d8-4c7d-9bb4-8936ce128518", 00:10:44.376 "is_configured": false, 00:10:44.376 "data_offset": 0, 00:10:44.376 "data_size": 63488 00:10:44.376 }, 00:10:44.376 { 00:10:44.376 "name": "BaseBdev3", 00:10:44.376 "uuid": "aa934f6f-cc84-4e3d-b285-35c400657242", 00:10:44.376 "is_configured": true, 00:10:44.376 "data_offset": 2048, 00:10:44.376 "data_size": 63488 00:10:44.376 }, 00:10:44.376 { 00:10:44.376 "name": "BaseBdev4", 00:10:44.376 "uuid": 
"2a9f8408-7c42-4847-8e4a-54232cf37942", 00:10:44.376 "is_configured": true, 00:10:44.376 "data_offset": 2048, 00:10:44.376 "data_size": 63488 00:10:44.376 } 00:10:44.376 ] 00:10:44.376 }' 00:10:44.376 12:54:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.376 12:54:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.637 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:44.637 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.637 12:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.637 12:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.637 12:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.637 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:44.637 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:44.637 12:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.637 12:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.637 [2024-11-26 12:54:02.261483] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:44.637 12:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.637 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:44.637 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.637 12:54:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.637 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.637 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.637 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.637 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.637 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.637 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.637 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.637 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.637 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.637 12:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.637 12:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.637 12:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.637 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.637 "name": "Existed_Raid", 00:10:44.637 "uuid": "6b1c0fd7-97ce-4d86-a8b4-868b91dd4c30", 00:10:44.637 "strip_size_kb": 0, 00:10:44.637 "state": "configuring", 00:10:44.637 "raid_level": "raid1", 00:10:44.637 "superblock": true, 00:10:44.637 "num_base_bdevs": 4, 00:10:44.637 "num_base_bdevs_discovered": 2, 00:10:44.637 "num_base_bdevs_operational": 4, 00:10:44.637 "base_bdevs_list": [ 00:10:44.637 { 00:10:44.637 "name": null, 00:10:44.637 
"uuid": "b80303ad-b84c-426f-88c9-7eea7718171f", 00:10:44.637 "is_configured": false, 00:10:44.637 "data_offset": 0, 00:10:44.637 "data_size": 63488 00:10:44.637 }, 00:10:44.637 { 00:10:44.637 "name": null, 00:10:44.637 "uuid": "4113112d-a5d8-4c7d-9bb4-8936ce128518", 00:10:44.637 "is_configured": false, 00:10:44.637 "data_offset": 0, 00:10:44.637 "data_size": 63488 00:10:44.637 }, 00:10:44.637 { 00:10:44.637 "name": "BaseBdev3", 00:10:44.637 "uuid": "aa934f6f-cc84-4e3d-b285-35c400657242", 00:10:44.637 "is_configured": true, 00:10:44.637 "data_offset": 2048, 00:10:44.637 "data_size": 63488 00:10:44.637 }, 00:10:44.637 { 00:10:44.637 "name": "BaseBdev4", 00:10:44.637 "uuid": "2a9f8408-7c42-4847-8e4a-54232cf37942", 00:10:44.637 "is_configured": true, 00:10:44.637 "data_offset": 2048, 00:10:44.637 "data_size": 63488 00:10:44.637 } 00:10:44.637 ] 00:10:44.637 }' 00:10:44.637 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.637 12:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.206 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.206 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:45.206 12:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.206 12:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.206 12:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.206 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:45.206 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:45.206 12:54:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.206 12:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.206 [2024-11-26 12:54:02.731187] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:45.206 12:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.206 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:45.206 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.206 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.206 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.206 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:45.206 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.206 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.206 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.206 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.206 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.206 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.206 12:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.207 12:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.207 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.207 12:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.207 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.207 "name": "Existed_Raid", 00:10:45.207 "uuid": "6b1c0fd7-97ce-4d86-a8b4-868b91dd4c30", 00:10:45.207 "strip_size_kb": 0, 00:10:45.207 "state": "configuring", 00:10:45.207 "raid_level": "raid1", 00:10:45.207 "superblock": true, 00:10:45.207 "num_base_bdevs": 4, 00:10:45.207 "num_base_bdevs_discovered": 3, 00:10:45.207 "num_base_bdevs_operational": 4, 00:10:45.207 "base_bdevs_list": [ 00:10:45.207 { 00:10:45.207 "name": null, 00:10:45.207 "uuid": "b80303ad-b84c-426f-88c9-7eea7718171f", 00:10:45.207 "is_configured": false, 00:10:45.207 "data_offset": 0, 00:10:45.207 "data_size": 63488 00:10:45.207 }, 00:10:45.207 { 00:10:45.207 "name": "BaseBdev2", 00:10:45.207 "uuid": "4113112d-a5d8-4c7d-9bb4-8936ce128518", 00:10:45.207 "is_configured": true, 00:10:45.207 "data_offset": 2048, 00:10:45.207 "data_size": 63488 00:10:45.207 }, 00:10:45.207 { 00:10:45.207 "name": "BaseBdev3", 00:10:45.207 "uuid": "aa934f6f-cc84-4e3d-b285-35c400657242", 00:10:45.207 "is_configured": true, 00:10:45.207 "data_offset": 2048, 00:10:45.207 "data_size": 63488 00:10:45.207 }, 00:10:45.207 { 00:10:45.207 "name": "BaseBdev4", 00:10:45.207 "uuid": "2a9f8408-7c42-4847-8e4a-54232cf37942", 00:10:45.207 "is_configured": true, 00:10:45.207 "data_offset": 2048, 00:10:45.207 "data_size": 63488 00:10:45.207 } 00:10:45.207 ] 00:10:45.207 }' 00:10:45.207 12:54:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.207 12:54:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.776 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.776 12:54:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.776 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.776 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:45.776 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.776 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:45.776 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b80303ad-b84c-426f-88c9-7eea7718171f 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.777 [2024-11-26 12:54:03.257181] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:45.777 [2024-11-26 12:54:03.257467] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:45.777 [2024-11-26 12:54:03.257514] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:45.777 NewBaseBdev 00:10:45.777 [2024-11-26 12:54:03.257784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:10:45.777 [2024-11-26 12:54:03.257924] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:45.777 [2024-11-26 12:54:03.257941] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:45.777 [2024-11-26 12:54:03.258040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:45.777 [ 00:10:45.777 { 00:10:45.777 "name": "NewBaseBdev", 00:10:45.777 "aliases": [ 00:10:45.777 "b80303ad-b84c-426f-88c9-7eea7718171f" 00:10:45.777 ], 00:10:45.777 "product_name": "Malloc disk", 00:10:45.777 "block_size": 512, 00:10:45.777 "num_blocks": 65536, 00:10:45.777 "uuid": "b80303ad-b84c-426f-88c9-7eea7718171f", 00:10:45.777 "assigned_rate_limits": { 00:10:45.777 "rw_ios_per_sec": 0, 00:10:45.777 "rw_mbytes_per_sec": 0, 00:10:45.777 "r_mbytes_per_sec": 0, 00:10:45.777 "w_mbytes_per_sec": 0 00:10:45.777 }, 00:10:45.777 "claimed": true, 00:10:45.777 "claim_type": "exclusive_write", 00:10:45.777 "zoned": false, 00:10:45.777 "supported_io_types": { 00:10:45.777 "read": true, 00:10:45.777 "write": true, 00:10:45.777 "unmap": true, 00:10:45.777 "flush": true, 00:10:45.777 "reset": true, 00:10:45.777 "nvme_admin": false, 00:10:45.777 "nvme_io": false, 00:10:45.777 "nvme_io_md": false, 00:10:45.777 "write_zeroes": true, 00:10:45.777 "zcopy": true, 00:10:45.777 "get_zone_info": false, 00:10:45.777 "zone_management": false, 00:10:45.777 "zone_append": false, 00:10:45.777 "compare": false, 00:10:45.777 "compare_and_write": false, 00:10:45.777 "abort": true, 00:10:45.777 "seek_hole": false, 00:10:45.777 "seek_data": false, 00:10:45.777 "copy": true, 00:10:45.777 "nvme_iov_md": false 00:10:45.777 }, 00:10:45.777 "memory_domains": [ 00:10:45.777 { 00:10:45.777 "dma_device_id": "system", 00:10:45.777 "dma_device_type": 1 00:10:45.777 }, 00:10:45.777 { 00:10:45.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.777 "dma_device_type": 2 00:10:45.777 } 00:10:45.777 ], 00:10:45.777 "driver_specific": {} 00:10:45.777 } 00:10:45.777 ] 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.777 "name": "Existed_Raid", 00:10:45.777 "uuid": "6b1c0fd7-97ce-4d86-a8b4-868b91dd4c30", 00:10:45.777 "strip_size_kb": 0, 00:10:45.777 "state": "online", 00:10:45.777 "raid_level": 
"raid1", 00:10:45.777 "superblock": true, 00:10:45.777 "num_base_bdevs": 4, 00:10:45.777 "num_base_bdevs_discovered": 4, 00:10:45.777 "num_base_bdevs_operational": 4, 00:10:45.777 "base_bdevs_list": [ 00:10:45.777 { 00:10:45.777 "name": "NewBaseBdev", 00:10:45.777 "uuid": "b80303ad-b84c-426f-88c9-7eea7718171f", 00:10:45.777 "is_configured": true, 00:10:45.777 "data_offset": 2048, 00:10:45.777 "data_size": 63488 00:10:45.777 }, 00:10:45.777 { 00:10:45.777 "name": "BaseBdev2", 00:10:45.777 "uuid": "4113112d-a5d8-4c7d-9bb4-8936ce128518", 00:10:45.777 "is_configured": true, 00:10:45.777 "data_offset": 2048, 00:10:45.777 "data_size": 63488 00:10:45.777 }, 00:10:45.777 { 00:10:45.777 "name": "BaseBdev3", 00:10:45.777 "uuid": "aa934f6f-cc84-4e3d-b285-35c400657242", 00:10:45.777 "is_configured": true, 00:10:45.777 "data_offset": 2048, 00:10:45.777 "data_size": 63488 00:10:45.777 }, 00:10:45.777 { 00:10:45.777 "name": "BaseBdev4", 00:10:45.777 "uuid": "2a9f8408-7c42-4847-8e4a-54232cf37942", 00:10:45.777 "is_configured": true, 00:10:45.777 "data_offset": 2048, 00:10:45.777 "data_size": 63488 00:10:45.777 } 00:10:45.777 ] 00:10:45.777 }' 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.777 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.037 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:46.037 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:46.037 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:46.037 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:46.037 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:46.037 12:54:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:46.037 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:46.037 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:46.037 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.037 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.037 [2024-11-26 12:54:03.696731] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.298 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.298 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:46.298 "name": "Existed_Raid", 00:10:46.298 "aliases": [ 00:10:46.298 "6b1c0fd7-97ce-4d86-a8b4-868b91dd4c30" 00:10:46.298 ], 00:10:46.298 "product_name": "Raid Volume", 00:10:46.298 "block_size": 512, 00:10:46.298 "num_blocks": 63488, 00:10:46.298 "uuid": "6b1c0fd7-97ce-4d86-a8b4-868b91dd4c30", 00:10:46.298 "assigned_rate_limits": { 00:10:46.298 "rw_ios_per_sec": 0, 00:10:46.298 "rw_mbytes_per_sec": 0, 00:10:46.298 "r_mbytes_per_sec": 0, 00:10:46.298 "w_mbytes_per_sec": 0 00:10:46.298 }, 00:10:46.298 "claimed": false, 00:10:46.298 "zoned": false, 00:10:46.298 "supported_io_types": { 00:10:46.298 "read": true, 00:10:46.298 "write": true, 00:10:46.298 "unmap": false, 00:10:46.298 "flush": false, 00:10:46.298 "reset": true, 00:10:46.298 "nvme_admin": false, 00:10:46.298 "nvme_io": false, 00:10:46.298 "nvme_io_md": false, 00:10:46.298 "write_zeroes": true, 00:10:46.298 "zcopy": false, 00:10:46.298 "get_zone_info": false, 00:10:46.298 "zone_management": false, 00:10:46.298 "zone_append": false, 00:10:46.298 "compare": false, 00:10:46.298 "compare_and_write": false, 00:10:46.298 "abort": false, 00:10:46.298 "seek_hole": false, 
00:10:46.298 "seek_data": false, 00:10:46.298 "copy": false, 00:10:46.298 "nvme_iov_md": false 00:10:46.298 }, 00:10:46.298 "memory_domains": [ 00:10:46.298 { 00:10:46.298 "dma_device_id": "system", 00:10:46.298 "dma_device_type": 1 00:10:46.298 }, 00:10:46.298 { 00:10:46.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.298 "dma_device_type": 2 00:10:46.298 }, 00:10:46.298 { 00:10:46.298 "dma_device_id": "system", 00:10:46.298 "dma_device_type": 1 00:10:46.298 }, 00:10:46.298 { 00:10:46.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.298 "dma_device_type": 2 00:10:46.298 }, 00:10:46.298 { 00:10:46.298 "dma_device_id": "system", 00:10:46.298 "dma_device_type": 1 00:10:46.298 }, 00:10:46.298 { 00:10:46.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.298 "dma_device_type": 2 00:10:46.298 }, 00:10:46.298 { 00:10:46.298 "dma_device_id": "system", 00:10:46.298 "dma_device_type": 1 00:10:46.298 }, 00:10:46.298 { 00:10:46.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.298 "dma_device_type": 2 00:10:46.298 } 00:10:46.298 ], 00:10:46.298 "driver_specific": { 00:10:46.298 "raid": { 00:10:46.298 "uuid": "6b1c0fd7-97ce-4d86-a8b4-868b91dd4c30", 00:10:46.298 "strip_size_kb": 0, 00:10:46.298 "state": "online", 00:10:46.298 "raid_level": "raid1", 00:10:46.298 "superblock": true, 00:10:46.298 "num_base_bdevs": 4, 00:10:46.298 "num_base_bdevs_discovered": 4, 00:10:46.298 "num_base_bdevs_operational": 4, 00:10:46.298 "base_bdevs_list": [ 00:10:46.298 { 00:10:46.298 "name": "NewBaseBdev", 00:10:46.298 "uuid": "b80303ad-b84c-426f-88c9-7eea7718171f", 00:10:46.298 "is_configured": true, 00:10:46.298 "data_offset": 2048, 00:10:46.298 "data_size": 63488 00:10:46.298 }, 00:10:46.298 { 00:10:46.298 "name": "BaseBdev2", 00:10:46.298 "uuid": "4113112d-a5d8-4c7d-9bb4-8936ce128518", 00:10:46.298 "is_configured": true, 00:10:46.298 "data_offset": 2048, 00:10:46.298 "data_size": 63488 00:10:46.298 }, 00:10:46.298 { 00:10:46.298 "name": "BaseBdev3", 00:10:46.298 "uuid": 
"aa934f6f-cc84-4e3d-b285-35c400657242", 00:10:46.298 "is_configured": true, 00:10:46.298 "data_offset": 2048, 00:10:46.298 "data_size": 63488 00:10:46.298 }, 00:10:46.298 { 00:10:46.298 "name": "BaseBdev4", 00:10:46.298 "uuid": "2a9f8408-7c42-4847-8e4a-54232cf37942", 00:10:46.298 "is_configured": true, 00:10:46.298 "data_offset": 2048, 00:10:46.298 "data_size": 63488 00:10:46.298 } 00:10:46.298 ] 00:10:46.298 } 00:10:46.298 } 00:10:46.298 }' 00:10:46.298 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:46.298 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:46.298 BaseBdev2 00:10:46.298 BaseBdev3 00:10:46.298 BaseBdev4' 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.299 
12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.299 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.559 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.559 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.559 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:46.559 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.559 12:54:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.559 [2024-11-26 12:54:03.995934] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:46.559 [2024-11-26 12:54:03.996005] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:46.559 [2024-11-26 12:54:03.996091] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.559 [2024-11-26 12:54:03.996390] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:46.559 [2024-11-26 12:54:03.996450] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:10:46.559 12:54:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.559 12:54:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84839 00:10:46.559 12:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 84839 ']' 00:10:46.559 12:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 84839 00:10:46.559 12:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:46.559 12:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:46.559 12:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84839 00:10:46.559 12:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:46.559 12:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:46.559 12:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84839' 00:10:46.559 killing process with pid 84839 00:10:46.559 12:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 84839 00:10:46.559 [2024-11-26 12:54:04.045380] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:46.559 12:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 84839 00:10:46.559 [2024-11-26 12:54:04.084842] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:46.819 12:54:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:46.819 ************************************ 00:10:46.820 END TEST raid_state_function_test_sb 00:10:46.820 ************************************ 00:10:46.820 00:10:46.820 real 0m9.300s 00:10:46.820 user 0m15.863s 00:10:46.820 sys 0m1.994s 00:10:46.820 12:54:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:46.820 12:54:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.820 12:54:04 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:10:46.820 12:54:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:46.820 12:54:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:46.820 12:54:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:46.820 ************************************ 00:10:46.820 START TEST raid_superblock_test 00:10:46.820 ************************************ 00:10:46.820 12:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:10:46.820 12:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:46.820 12:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:46.820 12:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:46.820 12:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:46.820 12:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:46.820 12:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:46.820 12:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:46.820 12:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:46.820 12:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:46.820 12:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:46.820 12:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:10:46.820 12:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:46.820 12:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:46.820 12:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:46.820 12:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:46.820 12:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85488 00:10:46.820 12:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:46.820 12:54:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85488 00:10:46.820 12:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 85488 ']' 00:10:46.820 12:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.820 12:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:46.820 12:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.820 12:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:46.820 12:54:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.820 [2024-11-26 12:54:04.486318] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:46.820 [2024-11-26 12:54:04.486527] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85488 ] 00:10:47.080 [2024-11-26 12:54:04.646415] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.080 [2024-11-26 12:54:04.690614] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.080 [2024-11-26 12:54:04.732560] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:47.080 [2024-11-26 12:54:04.732699] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:47.659 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:47.659 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:47.659 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:47.659 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:47.659 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:47.659 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:47.659 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:47.659 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:47.659 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:47.659 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:47.659 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:47.659 
12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.659 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.659 malloc1 00:10:47.659 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.659 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:47.659 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.659 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.659 [2024-11-26 12:54:05.322671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:47.659 [2024-11-26 12:54:05.322834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.659 [2024-11-26 12:54:05.322879] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:47.659 [2024-11-26 12:54:05.322918] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.659 [2024-11-26 12:54:05.325183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.659 [2024-11-26 12:54:05.325258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:47.659 pt1 00:10:47.659 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.659 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:47.659 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:47.659 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:47.659 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:47.659 12:54:05 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:47.659 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:47.659 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:47.932 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:47.932 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:47.932 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.932 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.932 malloc2 00:10:47.932 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.932 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:47.932 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.932 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.932 [2024-11-26 12:54:05.359548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:47.932 [2024-11-26 12:54:05.359669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.932 [2024-11-26 12:54:05.359691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:47.932 [2024-11-26 12:54:05.359704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.932 [2024-11-26 12:54:05.362060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.932 [2024-11-26 12:54:05.362096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:47.932 
pt2 00:10:47.932 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.932 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:47.932 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:47.932 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:47.932 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:47.932 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:47.932 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:47.932 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:47.932 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:47.932 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:47.932 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.932 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.932 malloc3 00:10:47.932 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.932 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:47.932 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.932 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.932 [2024-11-26 12:54:05.388055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:47.932 [2024-11-26 12:54:05.388160] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.932 [2024-11-26 12:54:05.388222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:47.932 [2024-11-26 12:54:05.388259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.932 [2024-11-26 12:54:05.390282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.932 [2024-11-26 12:54:05.390362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:47.932 pt3 00:10:47.932 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.932 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:47.932 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:47.932 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.933 malloc4 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.933 [2024-11-26 12:54:05.420433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:47.933 [2024-11-26 12:54:05.420530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.933 [2024-11-26 12:54:05.420578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:47.933 [2024-11-26 12:54:05.420609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.933 [2024-11-26 12:54:05.422659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.933 [2024-11-26 12:54:05.422744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:47.933 pt4 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.933 [2024-11-26 12:54:05.432483] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:47.933 [2024-11-26 12:54:05.434336] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:47.933 [2024-11-26 12:54:05.434441] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:47.933 [2024-11-26 12:54:05.434499] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:47.933 [2024-11-26 12:54:05.434694] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:47.933 [2024-11-26 12:54:05.434746] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:47.933 [2024-11-26 12:54:05.435017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:47.933 [2024-11-26 12:54:05.435201] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:47.933 [2024-11-26 12:54:05.435249] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:47.933 [2024-11-26 12:54:05.435423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.933 
12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.933 "name": "raid_bdev1", 00:10:47.933 "uuid": "709f1621-8fc2-43be-ae07-9fc787359cc7", 00:10:47.933 "strip_size_kb": 0, 00:10:47.933 "state": "online", 00:10:47.933 "raid_level": "raid1", 00:10:47.933 "superblock": true, 00:10:47.933 "num_base_bdevs": 4, 00:10:47.933 "num_base_bdevs_discovered": 4, 00:10:47.933 "num_base_bdevs_operational": 4, 00:10:47.933 "base_bdevs_list": [ 00:10:47.933 { 00:10:47.933 "name": "pt1", 00:10:47.933 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:47.933 "is_configured": true, 00:10:47.933 "data_offset": 2048, 00:10:47.933 "data_size": 63488 00:10:47.933 }, 00:10:47.933 { 00:10:47.933 "name": "pt2", 00:10:47.933 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:47.933 "is_configured": true, 00:10:47.933 "data_offset": 2048, 00:10:47.933 "data_size": 63488 00:10:47.933 }, 00:10:47.933 { 00:10:47.933 "name": "pt3", 00:10:47.933 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:47.933 "is_configured": true, 00:10:47.933 "data_offset": 2048, 00:10:47.933 "data_size": 63488 
00:10:47.933 }, 00:10:47.933 { 00:10:47.933 "name": "pt4", 00:10:47.933 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:47.933 "is_configured": true, 00:10:47.933 "data_offset": 2048, 00:10:47.933 "data_size": 63488 00:10:47.933 } 00:10:47.933 ] 00:10:47.933 }' 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.933 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.194 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:48.194 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:48.194 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:48.194 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:48.194 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:48.194 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:48.455 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:48.455 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:48.455 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.455 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.455 [2024-11-26 12:54:05.879989] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:48.455 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.455 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:48.455 "name": "raid_bdev1", 00:10:48.455 "aliases": [ 00:10:48.455 "709f1621-8fc2-43be-ae07-9fc787359cc7" 00:10:48.455 ], 
00:10:48.455 "product_name": "Raid Volume", 00:10:48.455 "block_size": 512, 00:10:48.455 "num_blocks": 63488, 00:10:48.455 "uuid": "709f1621-8fc2-43be-ae07-9fc787359cc7", 00:10:48.455 "assigned_rate_limits": { 00:10:48.455 "rw_ios_per_sec": 0, 00:10:48.455 "rw_mbytes_per_sec": 0, 00:10:48.455 "r_mbytes_per_sec": 0, 00:10:48.455 "w_mbytes_per_sec": 0 00:10:48.455 }, 00:10:48.455 "claimed": false, 00:10:48.455 "zoned": false, 00:10:48.455 "supported_io_types": { 00:10:48.455 "read": true, 00:10:48.455 "write": true, 00:10:48.455 "unmap": false, 00:10:48.455 "flush": false, 00:10:48.455 "reset": true, 00:10:48.455 "nvme_admin": false, 00:10:48.455 "nvme_io": false, 00:10:48.455 "nvme_io_md": false, 00:10:48.455 "write_zeroes": true, 00:10:48.455 "zcopy": false, 00:10:48.455 "get_zone_info": false, 00:10:48.455 "zone_management": false, 00:10:48.455 "zone_append": false, 00:10:48.455 "compare": false, 00:10:48.455 "compare_and_write": false, 00:10:48.455 "abort": false, 00:10:48.455 "seek_hole": false, 00:10:48.455 "seek_data": false, 00:10:48.455 "copy": false, 00:10:48.455 "nvme_iov_md": false 00:10:48.455 }, 00:10:48.455 "memory_domains": [ 00:10:48.455 { 00:10:48.455 "dma_device_id": "system", 00:10:48.455 "dma_device_type": 1 00:10:48.455 }, 00:10:48.455 { 00:10:48.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.455 "dma_device_type": 2 00:10:48.455 }, 00:10:48.455 { 00:10:48.455 "dma_device_id": "system", 00:10:48.455 "dma_device_type": 1 00:10:48.455 }, 00:10:48.455 { 00:10:48.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.455 "dma_device_type": 2 00:10:48.455 }, 00:10:48.455 { 00:10:48.455 "dma_device_id": "system", 00:10:48.455 "dma_device_type": 1 00:10:48.455 }, 00:10:48.455 { 00:10:48.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.455 "dma_device_type": 2 00:10:48.455 }, 00:10:48.455 { 00:10:48.455 "dma_device_id": "system", 00:10:48.455 "dma_device_type": 1 00:10:48.455 }, 00:10:48.455 { 00:10:48.455 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:48.455 "dma_device_type": 2 00:10:48.455 } 00:10:48.455 ], 00:10:48.455 "driver_specific": { 00:10:48.455 "raid": { 00:10:48.455 "uuid": "709f1621-8fc2-43be-ae07-9fc787359cc7", 00:10:48.455 "strip_size_kb": 0, 00:10:48.455 "state": "online", 00:10:48.455 "raid_level": "raid1", 00:10:48.455 "superblock": true, 00:10:48.455 "num_base_bdevs": 4, 00:10:48.455 "num_base_bdevs_discovered": 4, 00:10:48.455 "num_base_bdevs_operational": 4, 00:10:48.455 "base_bdevs_list": [ 00:10:48.455 { 00:10:48.455 "name": "pt1", 00:10:48.455 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:48.455 "is_configured": true, 00:10:48.455 "data_offset": 2048, 00:10:48.455 "data_size": 63488 00:10:48.455 }, 00:10:48.455 { 00:10:48.455 "name": "pt2", 00:10:48.455 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:48.455 "is_configured": true, 00:10:48.455 "data_offset": 2048, 00:10:48.455 "data_size": 63488 00:10:48.455 }, 00:10:48.455 { 00:10:48.455 "name": "pt3", 00:10:48.455 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:48.455 "is_configured": true, 00:10:48.455 "data_offset": 2048, 00:10:48.455 "data_size": 63488 00:10:48.455 }, 00:10:48.455 { 00:10:48.455 "name": "pt4", 00:10:48.455 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:48.455 "is_configured": true, 00:10:48.455 "data_offset": 2048, 00:10:48.455 "data_size": 63488 00:10:48.455 } 00:10:48.455 ] 00:10:48.455 } 00:10:48.455 } 00:10:48.455 }' 00:10:48.455 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:48.455 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:48.455 pt2 00:10:48.455 pt3 00:10:48.455 pt4' 00:10:48.455 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.455 12:54:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:48.455 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.455 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.455 12:54:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:48.455 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.455 12:54:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.455 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.455 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.455 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.455 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.455 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:48.455 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.455 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.455 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.455 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.455 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.455 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.455 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.455 12:54:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:48.455 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.455 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.455 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.455 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.715 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.715 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.715 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.715 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:48.715 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.715 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.715 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.715 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.716 [2024-11-26 12:54:06.203404] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=709f1621-8fc2-43be-ae07-9fc787359cc7 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 709f1621-8fc2-43be-ae07-9fc787359cc7 ']' 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.716 [2024-11-26 12:54:06.247041] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:48.716 [2024-11-26 12:54:06.247070] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:48.716 [2024-11-26 12:54:06.247148] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:48.716 [2024-11-26 12:54:06.247265] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:48.716 [2024-11-26 12:54:06.247276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:48.716 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:48.716 12:54:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:48.976 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:48.976 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:48.976 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.976 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.976 [2024-11-26 12:54:06.398806] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:48.976 [2024-11-26 12:54:06.400748] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:48.976 [2024-11-26 12:54:06.400840] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:48.976 [2024-11-26 12:54:06.400888] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:48.976 [2024-11-26 12:54:06.400956] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:48.976 [2024-11-26 12:54:06.401024] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:48.976 [2024-11-26 12:54:06.401102] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:48.976 [2024-11-26 12:54:06.401157] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:48.976 [2024-11-26 12:54:06.401221] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:48.976 [2024-11-26 12:54:06.401252] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name 
raid_bdev1, state configuring 00:10:48.976 request: 00:10:48.976 { 00:10:48.976 "name": "raid_bdev1", 00:10:48.976 "raid_level": "raid1", 00:10:48.976 "base_bdevs": [ 00:10:48.976 "malloc1", 00:10:48.976 "malloc2", 00:10:48.976 "malloc3", 00:10:48.976 "malloc4" 00:10:48.976 ], 00:10:48.976 "superblock": false, 00:10:48.976 "method": "bdev_raid_create", 00:10:48.976 "req_id": 1 00:10:48.976 } 00:10:48.976 Got JSON-RPC error response 00:10:48.976 response: 00:10:48.976 { 00:10:48.976 "code": -17, 00:10:48.976 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:48.976 } 00:10:48.976 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:48.976 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:48.976 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:48.976 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:48.976 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:48.976 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:48.976 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.976 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.977 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.977 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.977 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:48.977 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:48.977 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:48.977 
12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.977 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.977 [2024-11-26 12:54:06.470640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:48.977 [2024-11-26 12:54:06.470723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.977 [2024-11-26 12:54:06.470772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:48.977 [2024-11-26 12:54:06.470798] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.977 [2024-11-26 12:54:06.472869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.977 [2024-11-26 12:54:06.472933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:48.977 [2024-11-26 12:54:06.473029] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:48.977 [2024-11-26 12:54:06.473094] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:48.977 pt1 00:10:48.977 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.977 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:48.977 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:48.977 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.977 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.977 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.977 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.977 12:54:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.977 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.977 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.977 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.977 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.977 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.977 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.977 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.977 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.977 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.977 "name": "raid_bdev1", 00:10:48.977 "uuid": "709f1621-8fc2-43be-ae07-9fc787359cc7", 00:10:48.977 "strip_size_kb": 0, 00:10:48.977 "state": "configuring", 00:10:48.977 "raid_level": "raid1", 00:10:48.977 "superblock": true, 00:10:48.977 "num_base_bdevs": 4, 00:10:48.977 "num_base_bdevs_discovered": 1, 00:10:48.977 "num_base_bdevs_operational": 4, 00:10:48.977 "base_bdevs_list": [ 00:10:48.977 { 00:10:48.977 "name": "pt1", 00:10:48.977 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:48.977 "is_configured": true, 00:10:48.977 "data_offset": 2048, 00:10:48.977 "data_size": 63488 00:10:48.977 }, 00:10:48.977 { 00:10:48.977 "name": null, 00:10:48.977 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:48.977 "is_configured": false, 00:10:48.977 "data_offset": 2048, 00:10:48.977 "data_size": 63488 00:10:48.977 }, 00:10:48.977 { 00:10:48.977 "name": null, 00:10:48.977 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:48.977 
"is_configured": false, 00:10:48.977 "data_offset": 2048, 00:10:48.977 "data_size": 63488 00:10:48.977 }, 00:10:48.977 { 00:10:48.977 "name": null, 00:10:48.977 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:48.977 "is_configured": false, 00:10:48.977 "data_offset": 2048, 00:10:48.977 "data_size": 63488 00:10:48.977 } 00:10:48.977 ] 00:10:48.977 }' 00:10:48.977 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.977 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.237 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:49.237 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:49.237 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.237 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.237 [2024-11-26 12:54:06.893918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:49.237 [2024-11-26 12:54:06.894018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.237 [2024-11-26 12:54:06.894068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:49.237 [2024-11-26 12:54:06.894094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.237 [2024-11-26 12:54:06.894457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.237 [2024-11-26 12:54:06.894511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:49.237 [2024-11-26 12:54:06.894601] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:49.237 [2024-11-26 12:54:06.894651] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:10:49.237 pt2 00:10:49.237 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.237 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:49.237 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.237 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.237 [2024-11-26 12:54:06.901921] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:49.237 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.237 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:49.237 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:49.237 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.238 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:49.238 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:49.238 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.238 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.238 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.238 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.238 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.238 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.238 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:10:49.238 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.497 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.498 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.498 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.498 "name": "raid_bdev1", 00:10:49.498 "uuid": "709f1621-8fc2-43be-ae07-9fc787359cc7", 00:10:49.498 "strip_size_kb": 0, 00:10:49.498 "state": "configuring", 00:10:49.498 "raid_level": "raid1", 00:10:49.498 "superblock": true, 00:10:49.498 "num_base_bdevs": 4, 00:10:49.498 "num_base_bdevs_discovered": 1, 00:10:49.498 "num_base_bdevs_operational": 4, 00:10:49.498 "base_bdevs_list": [ 00:10:49.498 { 00:10:49.498 "name": "pt1", 00:10:49.498 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:49.498 "is_configured": true, 00:10:49.498 "data_offset": 2048, 00:10:49.498 "data_size": 63488 00:10:49.498 }, 00:10:49.498 { 00:10:49.498 "name": null, 00:10:49.498 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:49.498 "is_configured": false, 00:10:49.498 "data_offset": 0, 00:10:49.498 "data_size": 63488 00:10:49.498 }, 00:10:49.498 { 00:10:49.498 "name": null, 00:10:49.498 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:49.498 "is_configured": false, 00:10:49.498 "data_offset": 2048, 00:10:49.498 "data_size": 63488 00:10:49.498 }, 00:10:49.498 { 00:10:49.498 "name": null, 00:10:49.498 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:49.498 "is_configured": false, 00:10:49.498 "data_offset": 2048, 00:10:49.498 "data_size": 63488 00:10:49.498 } 00:10:49.498 ] 00:10:49.498 }' 00:10:49.498 12:54:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.498 12:54:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.758 [2024-11-26 12:54:07.373104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:49.758 [2024-11-26 12:54:07.373244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.758 [2024-11-26 12:54:07.373280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:49.758 [2024-11-26 12:54:07.373309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.758 [2024-11-26 12:54:07.373685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.758 [2024-11-26 12:54:07.373741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:49.758 [2024-11-26 12:54:07.373831] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:49.758 [2024-11-26 12:54:07.373879] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:49.758 pt2 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:49.758 12:54:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.758 [2024-11-26 12:54:07.385041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:49.758 [2024-11-26 12:54:07.385152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.758 [2024-11-26 12:54:07.385185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:49.758 [2024-11-26 12:54:07.385224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.758 [2024-11-26 12:54:07.385540] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.758 [2024-11-26 12:54:07.385595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:49.758 [2024-11-26 12:54:07.385671] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:49.758 [2024-11-26 12:54:07.385715] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:49.758 pt3 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.758 [2024-11-26 12:54:07.397032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:49.758 [2024-11-26 
12:54:07.397076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.758 [2024-11-26 12:54:07.397105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:49.758 [2024-11-26 12:54:07.397114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.758 [2024-11-26 12:54:07.397404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.758 [2024-11-26 12:54:07.397422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:49.758 [2024-11-26 12:54:07.397468] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:49.758 [2024-11-26 12:54:07.397485] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:49.758 [2024-11-26 12:54:07.397606] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:49.758 [2024-11-26 12:54:07.397626] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:49.758 [2024-11-26 12:54:07.397855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:49.758 [2024-11-26 12:54:07.397976] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:49.758 [2024-11-26 12:54:07.397985] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:49.758 [2024-11-26 12:54:07.398079] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.758 pt4 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.758 12:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.018 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.018 "name": "raid_bdev1", 00:10:50.018 "uuid": "709f1621-8fc2-43be-ae07-9fc787359cc7", 00:10:50.018 "strip_size_kb": 0, 00:10:50.018 "state": "online", 00:10:50.018 "raid_level": "raid1", 00:10:50.019 "superblock": true, 00:10:50.019 "num_base_bdevs": 4, 00:10:50.019 
"num_base_bdevs_discovered": 4, 00:10:50.019 "num_base_bdevs_operational": 4, 00:10:50.019 "base_bdevs_list": [ 00:10:50.019 { 00:10:50.019 "name": "pt1", 00:10:50.019 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:50.019 "is_configured": true, 00:10:50.019 "data_offset": 2048, 00:10:50.019 "data_size": 63488 00:10:50.019 }, 00:10:50.019 { 00:10:50.019 "name": "pt2", 00:10:50.019 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.019 "is_configured": true, 00:10:50.019 "data_offset": 2048, 00:10:50.019 "data_size": 63488 00:10:50.019 }, 00:10:50.019 { 00:10:50.019 "name": "pt3", 00:10:50.019 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:50.019 "is_configured": true, 00:10:50.019 "data_offset": 2048, 00:10:50.019 "data_size": 63488 00:10:50.019 }, 00:10:50.019 { 00:10:50.019 "name": "pt4", 00:10:50.019 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:50.019 "is_configured": true, 00:10:50.019 "data_offset": 2048, 00:10:50.019 "data_size": 63488 00:10:50.019 } 00:10:50.019 ] 00:10:50.019 }' 00:10:50.019 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.019 12:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.279 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:50.279 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:50.279 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:50.279 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:50.279 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:50.279 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:50.279 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:50.279 12:54:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:50.279 12:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.279 12:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.279 [2024-11-26 12:54:07.840606] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:50.279 12:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.279 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:50.279 "name": "raid_bdev1", 00:10:50.279 "aliases": [ 00:10:50.279 "709f1621-8fc2-43be-ae07-9fc787359cc7" 00:10:50.279 ], 00:10:50.279 "product_name": "Raid Volume", 00:10:50.279 "block_size": 512, 00:10:50.279 "num_blocks": 63488, 00:10:50.279 "uuid": "709f1621-8fc2-43be-ae07-9fc787359cc7", 00:10:50.279 "assigned_rate_limits": { 00:10:50.279 "rw_ios_per_sec": 0, 00:10:50.279 "rw_mbytes_per_sec": 0, 00:10:50.279 "r_mbytes_per_sec": 0, 00:10:50.279 "w_mbytes_per_sec": 0 00:10:50.279 }, 00:10:50.279 "claimed": false, 00:10:50.279 "zoned": false, 00:10:50.279 "supported_io_types": { 00:10:50.279 "read": true, 00:10:50.279 "write": true, 00:10:50.279 "unmap": false, 00:10:50.279 "flush": false, 00:10:50.279 "reset": true, 00:10:50.279 "nvme_admin": false, 00:10:50.279 "nvme_io": false, 00:10:50.279 "nvme_io_md": false, 00:10:50.279 "write_zeroes": true, 00:10:50.279 "zcopy": false, 00:10:50.279 "get_zone_info": false, 00:10:50.279 "zone_management": false, 00:10:50.279 "zone_append": false, 00:10:50.279 "compare": false, 00:10:50.279 "compare_and_write": false, 00:10:50.279 "abort": false, 00:10:50.279 "seek_hole": false, 00:10:50.279 "seek_data": false, 00:10:50.279 "copy": false, 00:10:50.279 "nvme_iov_md": false 00:10:50.279 }, 00:10:50.279 "memory_domains": [ 00:10:50.279 { 00:10:50.279 "dma_device_id": "system", 00:10:50.279 
"dma_device_type": 1 00:10:50.279 }, 00:10:50.279 { 00:10:50.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.279 "dma_device_type": 2 00:10:50.279 }, 00:10:50.279 { 00:10:50.279 "dma_device_id": "system", 00:10:50.279 "dma_device_type": 1 00:10:50.279 }, 00:10:50.279 { 00:10:50.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.279 "dma_device_type": 2 00:10:50.279 }, 00:10:50.279 { 00:10:50.279 "dma_device_id": "system", 00:10:50.279 "dma_device_type": 1 00:10:50.279 }, 00:10:50.279 { 00:10:50.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.279 "dma_device_type": 2 00:10:50.279 }, 00:10:50.279 { 00:10:50.279 "dma_device_id": "system", 00:10:50.279 "dma_device_type": 1 00:10:50.279 }, 00:10:50.279 { 00:10:50.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.279 "dma_device_type": 2 00:10:50.279 } 00:10:50.279 ], 00:10:50.279 "driver_specific": { 00:10:50.279 "raid": { 00:10:50.279 "uuid": "709f1621-8fc2-43be-ae07-9fc787359cc7", 00:10:50.280 "strip_size_kb": 0, 00:10:50.280 "state": "online", 00:10:50.280 "raid_level": "raid1", 00:10:50.280 "superblock": true, 00:10:50.280 "num_base_bdevs": 4, 00:10:50.280 "num_base_bdevs_discovered": 4, 00:10:50.280 "num_base_bdevs_operational": 4, 00:10:50.280 "base_bdevs_list": [ 00:10:50.280 { 00:10:50.280 "name": "pt1", 00:10:50.280 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:50.280 "is_configured": true, 00:10:50.280 "data_offset": 2048, 00:10:50.280 "data_size": 63488 00:10:50.280 }, 00:10:50.280 { 00:10:50.280 "name": "pt2", 00:10:50.280 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.280 "is_configured": true, 00:10:50.280 "data_offset": 2048, 00:10:50.280 "data_size": 63488 00:10:50.280 }, 00:10:50.280 { 00:10:50.280 "name": "pt3", 00:10:50.280 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:50.280 "is_configured": true, 00:10:50.280 "data_offset": 2048, 00:10:50.280 "data_size": 63488 00:10:50.280 }, 00:10:50.280 { 00:10:50.280 "name": "pt4", 00:10:50.280 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:10:50.280 "is_configured": true, 00:10:50.280 "data_offset": 2048, 00:10:50.280 "data_size": 63488 00:10:50.280 } 00:10:50.280 ] 00:10:50.280 } 00:10:50.280 } 00:10:50.280 }' 00:10:50.280 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:50.280 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:50.280 pt2 00:10:50.280 pt3 00:10:50.280 pt4' 00:10:50.280 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.280 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:50.280 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.280 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:50.280 12:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.280 12:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.280 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.280 12:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.540 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.540 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.540 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.540 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:50.540 12:54:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.540 12:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.540 12:54:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.540 12:54:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.540 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.540 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.540 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.540 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:50.540 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.540 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.540 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.540 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.540 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.540 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:50.541 [2024-11-26 12:54:08.136057] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 709f1621-8fc2-43be-ae07-9fc787359cc7 '!=' 709f1621-8fc2-43be-ae07-9fc787359cc7 ']' 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.541 [2024-11-26 12:54:08.183736] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:50.541 12:54:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.541 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.801 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.801 "name": "raid_bdev1", 00:10:50.801 "uuid": "709f1621-8fc2-43be-ae07-9fc787359cc7", 00:10:50.801 "strip_size_kb": 0, 00:10:50.801 "state": "online", 
00:10:50.801 "raid_level": "raid1", 00:10:50.801 "superblock": true, 00:10:50.801 "num_base_bdevs": 4, 00:10:50.801 "num_base_bdevs_discovered": 3, 00:10:50.801 "num_base_bdevs_operational": 3, 00:10:50.801 "base_bdevs_list": [ 00:10:50.801 { 00:10:50.801 "name": null, 00:10:50.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.801 "is_configured": false, 00:10:50.801 "data_offset": 0, 00:10:50.801 "data_size": 63488 00:10:50.801 }, 00:10:50.801 { 00:10:50.801 "name": "pt2", 00:10:50.801 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.801 "is_configured": true, 00:10:50.801 "data_offset": 2048, 00:10:50.801 "data_size": 63488 00:10:50.801 }, 00:10:50.801 { 00:10:50.801 "name": "pt3", 00:10:50.801 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:50.801 "is_configured": true, 00:10:50.801 "data_offset": 2048, 00:10:50.801 "data_size": 63488 00:10:50.801 }, 00:10:50.801 { 00:10:50.801 "name": "pt4", 00:10:50.801 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:50.801 "is_configured": true, 00:10:50.801 "data_offset": 2048, 00:10:50.801 "data_size": 63488 00:10:50.801 } 00:10:50.801 ] 00:10:50.801 }' 00:10:50.801 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.801 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.061 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:51.061 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.061 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.061 [2024-11-26 12:54:08.610969] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:51.061 [2024-11-26 12:54:08.611043] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:51.061 [2024-11-26 12:54:08.611127] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:10:51.061 [2024-11-26 12:54:08.611236] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:51.061 [2024-11-26 12:54:08.611295] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:10:51.061 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.061 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.061 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:51.061 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.061 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.061 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.061 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:51.061 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:51.061 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:51.061 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:51.062 
12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.062 [2024-11-26 12:54:08.682848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:51.062 [2024-11-26 12:54:08.682921] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.062 [2024-11-26 12:54:08.682938] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:51.062 [2024-11-26 12:54:08.682955] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.062 [2024-11-26 12:54:08.685118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.062 [2024-11-26 12:54:08.685160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:51.062 [2024-11-26 12:54:08.685235] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:51.062 [2024-11-26 12:54:08.685265] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:51.062 pt2 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.062 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.322 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.322 "name": "raid_bdev1", 00:10:51.322 "uuid": "709f1621-8fc2-43be-ae07-9fc787359cc7", 00:10:51.322 "strip_size_kb": 0, 00:10:51.322 "state": "configuring", 00:10:51.322 "raid_level": "raid1", 00:10:51.322 "superblock": true, 00:10:51.322 "num_base_bdevs": 4, 00:10:51.322 "num_base_bdevs_discovered": 1, 00:10:51.322 "num_base_bdevs_operational": 3, 00:10:51.322 "base_bdevs_list": [ 00:10:51.322 { 00:10:51.322 "name": null, 00:10:51.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.322 "is_configured": false, 00:10:51.322 "data_offset": 2048, 00:10:51.322 "data_size": 63488 00:10:51.322 }, 00:10:51.322 { 00:10:51.322 "name": "pt2", 00:10:51.322 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:51.322 "is_configured": true, 00:10:51.322 "data_offset": 2048, 00:10:51.322 "data_size": 63488 00:10:51.322 }, 00:10:51.322 { 00:10:51.322 "name": null, 00:10:51.322 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:51.322 "is_configured": false, 00:10:51.322 "data_offset": 2048, 00:10:51.322 "data_size": 63488 00:10:51.322 }, 00:10:51.322 { 00:10:51.322 "name": null, 00:10:51.322 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:51.322 "is_configured": false, 00:10:51.322 "data_offset": 2048, 00:10:51.322 "data_size": 63488 00:10:51.322 } 00:10:51.322 ] 00:10:51.322 }' 
00:10:51.322 12:54:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.322 12:54:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.582 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:51.582 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:51.582 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:51.582 12:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.582 12:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.582 [2024-11-26 12:54:09.078227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:51.582 [2024-11-26 12:54:09.078327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.582 [2024-11-26 12:54:09.078376] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:10:51.582 [2024-11-26 12:54:09.078407] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.582 [2024-11-26 12:54:09.078766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.582 [2024-11-26 12:54:09.078822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:51.582 [2024-11-26 12:54:09.078908] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:51.582 [2024-11-26 12:54:09.078956] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:51.582 pt3 00:10:51.582 12:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.582 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:10:51.582 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:51.582 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.583 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:51.583 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:51.583 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:51.583 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.583 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.583 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.583 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.583 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.583 12:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.583 12:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.583 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.583 12:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.583 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.583 "name": "raid_bdev1", 00:10:51.583 "uuid": "709f1621-8fc2-43be-ae07-9fc787359cc7", 00:10:51.583 "strip_size_kb": 0, 00:10:51.583 "state": "configuring", 00:10:51.583 "raid_level": "raid1", 00:10:51.583 "superblock": true, 00:10:51.583 "num_base_bdevs": 4, 00:10:51.583 "num_base_bdevs_discovered": 2, 00:10:51.583 "num_base_bdevs_operational": 3, 00:10:51.583 
"base_bdevs_list": [ 00:10:51.583 { 00:10:51.583 "name": null, 00:10:51.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.583 "is_configured": false, 00:10:51.583 "data_offset": 2048, 00:10:51.583 "data_size": 63488 00:10:51.583 }, 00:10:51.583 { 00:10:51.583 "name": "pt2", 00:10:51.583 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:51.583 "is_configured": true, 00:10:51.583 "data_offset": 2048, 00:10:51.583 "data_size": 63488 00:10:51.583 }, 00:10:51.583 { 00:10:51.583 "name": "pt3", 00:10:51.583 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:51.583 "is_configured": true, 00:10:51.583 "data_offset": 2048, 00:10:51.583 "data_size": 63488 00:10:51.583 }, 00:10:51.583 { 00:10:51.583 "name": null, 00:10:51.583 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:51.583 "is_configured": false, 00:10:51.583 "data_offset": 2048, 00:10:51.583 "data_size": 63488 00:10:51.583 } 00:10:51.583 ] 00:10:51.583 }' 00:10:51.583 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.583 12:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.842 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:51.842 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:51.842 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:10:51.842 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:51.842 12:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.842 12:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.842 [2024-11-26 12:54:09.493480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:51.842 [2024-11-26 12:54:09.493559] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.842 [2024-11-26 12:54:09.493579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:10:51.842 [2024-11-26 12:54:09.493589] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.842 [2024-11-26 12:54:09.493934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.842 [2024-11-26 12:54:09.493951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:51.842 [2024-11-26 12:54:09.494016] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:51.842 [2024-11-26 12:54:09.494046] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:51.843 [2024-11-26 12:54:09.494141] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:51.843 [2024-11-26 12:54:09.494151] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:51.843 [2024-11-26 12:54:09.494379] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:51.843 [2024-11-26 12:54:09.494497] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:51.843 [2024-11-26 12:54:09.494506] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:10:51.843 [2024-11-26 12:54:09.494610] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.843 pt4 00:10:51.843 12:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.843 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:51.843 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:51.843 12:54:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.843 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:51.843 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:51.843 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:51.843 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.843 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.843 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.843 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.843 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.843 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.843 12:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.843 12:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.103 12:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.103 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.103 "name": "raid_bdev1", 00:10:52.103 "uuid": "709f1621-8fc2-43be-ae07-9fc787359cc7", 00:10:52.103 "strip_size_kb": 0, 00:10:52.103 "state": "online", 00:10:52.103 "raid_level": "raid1", 00:10:52.103 "superblock": true, 00:10:52.103 "num_base_bdevs": 4, 00:10:52.103 "num_base_bdevs_discovered": 3, 00:10:52.103 "num_base_bdevs_operational": 3, 00:10:52.103 "base_bdevs_list": [ 00:10:52.103 { 00:10:52.103 "name": null, 00:10:52.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.103 "is_configured": false, 00:10:52.103 
"data_offset": 2048, 00:10:52.103 "data_size": 63488 00:10:52.103 }, 00:10:52.103 { 00:10:52.103 "name": "pt2", 00:10:52.103 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:52.103 "is_configured": true, 00:10:52.103 "data_offset": 2048, 00:10:52.103 "data_size": 63488 00:10:52.103 }, 00:10:52.103 { 00:10:52.103 "name": "pt3", 00:10:52.103 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:52.103 "is_configured": true, 00:10:52.103 "data_offset": 2048, 00:10:52.103 "data_size": 63488 00:10:52.103 }, 00:10:52.103 { 00:10:52.103 "name": "pt4", 00:10:52.103 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:52.103 "is_configured": true, 00:10:52.103 "data_offset": 2048, 00:10:52.103 "data_size": 63488 00:10:52.103 } 00:10:52.103 ] 00:10:52.103 }' 00:10:52.103 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.103 12:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.364 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:52.364 12:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.364 12:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.364 [2024-11-26 12:54:09.936712] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:52.364 [2024-11-26 12:54:09.936808] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:52.364 [2024-11-26 12:54:09.936906] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:52.364 [2024-11-26 12:54:09.936989] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:52.364 [2024-11-26 12:54:09.937022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:10:52.364 12:54:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.364 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.364 12:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.364 12:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.364 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:52.364 12:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.364 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:52.364 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:52.364 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:10:52.364 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:10:52.364 12:54:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:10:52.364 12:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.364 12:54:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.364 12:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.364 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:52.364 12:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.364 12:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.364 [2024-11-26 12:54:10.012601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:52.364 [2024-11-26 12:54:10.012713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:10:52.364 [2024-11-26 12:54:10.012752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:10:52.364 [2024-11-26 12:54:10.012780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.364 [2024-11-26 12:54:10.014909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.364 [2024-11-26 12:54:10.014991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:52.364 [2024-11-26 12:54:10.015073] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:52.364 [2024-11-26 12:54:10.015127] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:52.364 [2024-11-26 12:54:10.015262] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:52.364 [2024-11-26 12:54:10.015325] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:52.364 [2024-11-26 12:54:10.015378] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:10:52.364 [2024-11-26 12:54:10.015454] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:52.364 [2024-11-26 12:54:10.015593] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:52.364 pt1 00:10:52.364 12:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.364 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:10:52.364 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:52.364 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.364 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:52.364 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:52.364 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.364 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.364 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.364 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.364 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.364 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.364 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.364 12:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.364 12:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.364 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.364 12:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.625 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.625 "name": "raid_bdev1", 00:10:52.625 "uuid": "709f1621-8fc2-43be-ae07-9fc787359cc7", 00:10:52.625 "strip_size_kb": 0, 00:10:52.625 "state": "configuring", 00:10:52.625 "raid_level": "raid1", 00:10:52.625 "superblock": true, 00:10:52.625 "num_base_bdevs": 4, 00:10:52.625 "num_base_bdevs_discovered": 2, 00:10:52.625 "num_base_bdevs_operational": 3, 00:10:52.625 "base_bdevs_list": [ 00:10:52.625 { 00:10:52.625 "name": null, 00:10:52.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.625 "is_configured": false, 00:10:52.625 "data_offset": 2048, 00:10:52.625 
"data_size": 63488 00:10:52.625 }, 00:10:52.625 { 00:10:52.625 "name": "pt2", 00:10:52.625 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:52.625 "is_configured": true, 00:10:52.625 "data_offset": 2048, 00:10:52.625 "data_size": 63488 00:10:52.625 }, 00:10:52.625 { 00:10:52.625 "name": "pt3", 00:10:52.625 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:52.625 "is_configured": true, 00:10:52.625 "data_offset": 2048, 00:10:52.625 "data_size": 63488 00:10:52.625 }, 00:10:52.625 { 00:10:52.625 "name": null, 00:10:52.625 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:52.625 "is_configured": false, 00:10:52.625 "data_offset": 2048, 00:10:52.625 "data_size": 63488 00:10:52.625 } 00:10:52.625 ] 00:10:52.625 }' 00:10:52.625 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.625 12:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.886 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:52.886 12:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.886 12:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.886 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:52.886 12:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.886 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:52.886 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:52.886 12:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.886 12:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.886 [2024-11-26 
12:54:10.523714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:52.886 [2024-11-26 12:54:10.523772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.886 [2024-11-26 12:54:10.523790] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:10:52.886 [2024-11-26 12:54:10.523801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.886 [2024-11-26 12:54:10.524205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.886 [2024-11-26 12:54:10.524243] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:52.886 [2024-11-26 12:54:10.524309] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:52.886 [2024-11-26 12:54:10.524332] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:52.886 [2024-11-26 12:54:10.524430] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:10:52.886 [2024-11-26 12:54:10.524444] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:52.886 [2024-11-26 12:54:10.524690] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:52.886 [2024-11-26 12:54:10.524801] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:10:52.886 [2024-11-26 12:54:10.524814] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:10:52.886 [2024-11-26 12:54:10.524914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.886 pt4 00:10:52.886 12:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.886 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:52.886 12:54:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.886 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.886 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:52.886 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.886 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.886 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.886 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.886 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.886 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.886 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.886 12:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.886 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.886 12:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.886 12:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.146 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.146 "name": "raid_bdev1", 00:10:53.146 "uuid": "709f1621-8fc2-43be-ae07-9fc787359cc7", 00:10:53.146 "strip_size_kb": 0, 00:10:53.146 "state": "online", 00:10:53.146 "raid_level": "raid1", 00:10:53.146 "superblock": true, 00:10:53.146 "num_base_bdevs": 4, 00:10:53.146 "num_base_bdevs_discovered": 3, 00:10:53.146 "num_base_bdevs_operational": 3, 00:10:53.146 "base_bdevs_list": [ 00:10:53.146 { 
00:10:53.146 "name": null, 00:10:53.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.146 "is_configured": false, 00:10:53.146 "data_offset": 2048, 00:10:53.146 "data_size": 63488 00:10:53.146 }, 00:10:53.146 { 00:10:53.146 "name": "pt2", 00:10:53.146 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:53.146 "is_configured": true, 00:10:53.146 "data_offset": 2048, 00:10:53.146 "data_size": 63488 00:10:53.146 }, 00:10:53.146 { 00:10:53.146 "name": "pt3", 00:10:53.146 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:53.146 "is_configured": true, 00:10:53.146 "data_offset": 2048, 00:10:53.146 "data_size": 63488 00:10:53.146 }, 00:10:53.146 { 00:10:53.146 "name": "pt4", 00:10:53.146 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:53.146 "is_configured": true, 00:10:53.146 "data_offset": 2048, 00:10:53.146 "data_size": 63488 00:10:53.146 } 00:10:53.146 ] 00:10:53.146 }' 00:10:53.146 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.146 12:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.406 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:53.406 12:54:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:53.406 12:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.406 12:54:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.406 12:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.406 12:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:53.406 12:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:53.406 12:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.406 
12:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.406 12:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:53.406 [2024-11-26 12:54:11.035102] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:53.406 12:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.406 12:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 709f1621-8fc2-43be-ae07-9fc787359cc7 '!=' 709f1621-8fc2-43be-ae07-9fc787359cc7 ']' 00:10:53.406 12:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85488 00:10:53.406 12:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 85488 ']' 00:10:53.406 12:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 85488 00:10:53.406 12:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:53.406 12:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:53.406 12:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85488 00:10:53.666 12:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:53.666 12:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:53.667 12:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85488' 00:10:53.667 killing process with pid 85488 00:10:53.667 12:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 85488 00:10:53.667 [2024-11-26 12:54:11.113752] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:53.667 [2024-11-26 12:54:11.113833] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:53.667 [2024-11-26 12:54:11.113902] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:53.667 [2024-11-26 12:54:11.113913] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:10:53.667 12:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 85488 00:10:53.667 [2024-11-26 12:54:11.156136] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:53.928 12:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:53.928 00:10:53.928 real 0m7.001s 00:10:53.928 user 0m11.795s 00:10:53.928 sys 0m1.435s 00:10:53.928 12:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:53.928 12:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.928 ************************************ 00:10:53.928 END TEST raid_superblock_test 00:10:53.928 ************************************ 00:10:53.928 12:54:11 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:10:53.928 12:54:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:53.928 12:54:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:53.928 12:54:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:53.928 ************************************ 00:10:53.928 START TEST raid_read_error_test 00:10:53.928 ************************************ 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:53.928 12:54:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.B8ypadcMXO 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85964 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85964 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 85964 ']' 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:53.928 12:54:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.928 [2024-11-26 12:54:11.580550] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:53.928 [2024-11-26 12:54:11.580785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85964 ] 00:10:54.188 [2024-11-26 12:54:11.741276] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.188 [2024-11-26 12:54:11.785735] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.188 [2024-11-26 12:54:11.828111] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.188 [2024-11-26 12:54:11.828245] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.759 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:54.759 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:54.759 12:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:54.759 12:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:54.759 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.759 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.759 BaseBdev1_malloc 00:10:54.759 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.759 12:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:54.759 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.759 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.759 true 00:10:54.759 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:54.759 12:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:54.759 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.759 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.759 [2024-11-26 12:54:12.430191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:54.759 [2024-11-26 12:54:12.430259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.759 [2024-11-26 12:54:12.430281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:54.759 [2024-11-26 12:54:12.430290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.759 [2024-11-26 12:54:12.432437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.759 [2024-11-26 12:54:12.432473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:55.020 BaseBdev1 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.020 BaseBdev2_malloc 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.020 true 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.020 [2024-11-26 12:54:12.478891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:55.020 [2024-11-26 12:54:12.478997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.020 [2024-11-26 12:54:12.479035] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:55.020 [2024-11-26 12:54:12.479043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.020 [2024-11-26 12:54:12.481009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.020 [2024-11-26 12:54:12.481045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:55.020 BaseBdev2 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.020 BaseBdev3_malloc 00:10:55.020 12:54:12 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.020 true 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.020 [2024-11-26 12:54:12.519422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:55.020 [2024-11-26 12:54:12.519466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.020 [2024-11-26 12:54:12.519485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:55.020 [2024-11-26 12:54:12.519492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.020 [2024-11-26 12:54:12.521476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.020 [2024-11-26 12:54:12.521509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:55.020 BaseBdev3 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.020 BaseBdev4_malloc 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.020 true 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.020 [2024-11-26 12:54:12.559745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:55.020 [2024-11-26 12:54:12.559792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.020 [2024-11-26 12:54:12.559813] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:55.020 [2024-11-26 12:54:12.559820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.020 [2024-11-26 12:54:12.561798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.020 [2024-11-26 12:54:12.561835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:55.020 BaseBdev4 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.020 [2024-11-26 12:54:12.571774] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.020 [2024-11-26 12:54:12.573546] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:55.020 [2024-11-26 12:54:12.573631] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:55.020 [2024-11-26 12:54:12.573683] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:55.020 [2024-11-26 12:54:12.573872] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:55.020 [2024-11-26 12:54:12.573883] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:55.020 [2024-11-26 12:54:12.574135] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:55.020 [2024-11-26 12:54:12.574272] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:55.020 [2024-11-26 12:54:12.574287] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:55.020 [2024-11-26 12:54:12.574396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.020 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.021 12:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:55.021 12:54:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.021 12:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.021 12:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.021 12:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.021 12:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.021 12:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.021 12:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.021 12:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.021 12:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.021 12:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.021 12:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.021 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.021 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.021 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.021 12:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.021 "name": "raid_bdev1", 00:10:55.021 "uuid": "21f1e67a-56dc-45a7-a63e-26bafd605f4a", 00:10:55.021 "strip_size_kb": 0, 00:10:55.021 "state": "online", 00:10:55.021 "raid_level": "raid1", 00:10:55.021 "superblock": true, 00:10:55.021 "num_base_bdevs": 4, 00:10:55.021 "num_base_bdevs_discovered": 4, 00:10:55.021 "num_base_bdevs_operational": 4, 00:10:55.021 "base_bdevs_list": [ 00:10:55.021 { 
00:10:55.021 "name": "BaseBdev1", 00:10:55.021 "uuid": "54a6feab-7554-5c1c-aab2-054908b6e7c6", 00:10:55.021 "is_configured": true, 00:10:55.021 "data_offset": 2048, 00:10:55.021 "data_size": 63488 00:10:55.021 }, 00:10:55.021 { 00:10:55.021 "name": "BaseBdev2", 00:10:55.021 "uuid": "340be4a1-84a2-50f9-9ade-7604c697ce20", 00:10:55.021 "is_configured": true, 00:10:55.021 "data_offset": 2048, 00:10:55.021 "data_size": 63488 00:10:55.021 }, 00:10:55.021 { 00:10:55.021 "name": "BaseBdev3", 00:10:55.021 "uuid": "0e701da5-4cf8-51ea-99ec-b2772d901c2a", 00:10:55.021 "is_configured": true, 00:10:55.021 "data_offset": 2048, 00:10:55.021 "data_size": 63488 00:10:55.021 }, 00:10:55.021 { 00:10:55.021 "name": "BaseBdev4", 00:10:55.021 "uuid": "d4646e5b-d392-5bf9-abea-9deaac7da4d6", 00:10:55.021 "is_configured": true, 00:10:55.021 "data_offset": 2048, 00:10:55.021 "data_size": 63488 00:10:55.021 } 00:10:55.021 ] 00:10:55.021 }' 00:10:55.021 12:54:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.021 12:54:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.591 12:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:55.591 12:54:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:55.591 [2024-11-26 12:54:13.099255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:56.532 12:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:56.532 12:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.532 12:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.532 12:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.532 12:54:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:56.532 12:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:56.532 12:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:56.532 12:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:56.532 12:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:56.532 12:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:56.532 12:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.532 12:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.532 12:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.532 12:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.532 12:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.532 12:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.532 12:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.532 12:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.532 12:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.532 12:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.532 12:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.532 12:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.532 12:54:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.532 12:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.532 "name": "raid_bdev1", 00:10:56.532 "uuid": "21f1e67a-56dc-45a7-a63e-26bafd605f4a", 00:10:56.532 "strip_size_kb": 0, 00:10:56.532 "state": "online", 00:10:56.532 "raid_level": "raid1", 00:10:56.532 "superblock": true, 00:10:56.532 "num_base_bdevs": 4, 00:10:56.532 "num_base_bdevs_discovered": 4, 00:10:56.532 "num_base_bdevs_operational": 4, 00:10:56.532 "base_bdevs_list": [ 00:10:56.532 { 00:10:56.532 "name": "BaseBdev1", 00:10:56.532 "uuid": "54a6feab-7554-5c1c-aab2-054908b6e7c6", 00:10:56.532 "is_configured": true, 00:10:56.532 "data_offset": 2048, 00:10:56.532 "data_size": 63488 00:10:56.532 }, 00:10:56.532 { 00:10:56.532 "name": "BaseBdev2", 00:10:56.532 "uuid": "340be4a1-84a2-50f9-9ade-7604c697ce20", 00:10:56.532 "is_configured": true, 00:10:56.532 "data_offset": 2048, 00:10:56.532 "data_size": 63488 00:10:56.532 }, 00:10:56.532 { 00:10:56.532 "name": "BaseBdev3", 00:10:56.532 "uuid": "0e701da5-4cf8-51ea-99ec-b2772d901c2a", 00:10:56.532 "is_configured": true, 00:10:56.532 "data_offset": 2048, 00:10:56.532 "data_size": 63488 00:10:56.532 }, 00:10:56.532 { 00:10:56.532 "name": "BaseBdev4", 00:10:56.532 "uuid": "d4646e5b-d392-5bf9-abea-9deaac7da4d6", 00:10:56.532 "is_configured": true, 00:10:56.532 "data_offset": 2048, 00:10:56.532 "data_size": 63488 00:10:56.532 } 00:10:56.532 ] 00:10:56.532 }' 00:10:56.532 12:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.532 12:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.793 12:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:56.793 12:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.793 12:54:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:56.793 [2024-11-26 12:54:14.455161] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:56.793 [2024-11-26 12:54:14.455291] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:56.793 [2024-11-26 12:54:14.457836] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.793 [2024-11-26 12:54:14.457925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.793 [2024-11-26 12:54:14.458078] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:56.793 [2024-11-26 12:54:14.458125] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:56.793 { 00:10:56.793 "results": [ 00:10:56.793 { 00:10:56.793 "job": "raid_bdev1", 00:10:56.793 "core_mask": "0x1", 00:10:56.793 "workload": "randrw", 00:10:56.793 "percentage": 50, 00:10:56.793 "status": "finished", 00:10:56.793 "queue_depth": 1, 00:10:56.793 "io_size": 131072, 00:10:56.793 "runtime": 1.357, 00:10:56.793 "iops": 12114.959469417834, 00:10:56.793 "mibps": 1514.3699336772293, 00:10:56.793 "io_failed": 0, 00:10:56.793 "io_timeout": 0, 00:10:56.793 "avg_latency_us": 80.15212741316844, 00:10:56.793 "min_latency_us": 21.799126637554586, 00:10:56.793 "max_latency_us": 1366.5257641921398 00:10:56.793 } 00:10:56.793 ], 00:10:56.793 "core_count": 1 00:10:56.793 } 00:10:56.793 12:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.793 12:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85964 00:10:56.793 12:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 85964 ']' 00:10:56.793 12:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 85964 00:10:56.793 12:54:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:10:56.793 12:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:57.053 12:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85964 00:10:57.053 killing process with pid 85964 00:10:57.053 12:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:57.053 12:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:57.053 12:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85964' 00:10:57.053 12:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 85964 00:10:57.053 [2024-11-26 12:54:14.496063] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:57.053 12:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 85964 00:10:57.053 [2024-11-26 12:54:14.529847] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:57.313 12:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.B8ypadcMXO 00:10:57.313 12:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:57.313 12:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:57.313 12:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:57.313 12:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:57.313 12:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:57.313 12:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:57.313 12:54:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:57.313 00:10:57.313 real 0m3.290s 00:10:57.313 user 0m4.132s 00:10:57.313 sys 0m0.523s 
00:10:57.313 12:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:57.313 12:54:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.313 ************************************ 00:10:57.313 END TEST raid_read_error_test 00:10:57.313 ************************************ 00:10:57.313 12:54:14 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:10:57.313 12:54:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:57.313 12:54:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:57.313 12:54:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:57.313 ************************************ 00:10:57.313 START TEST raid_write_error_test 00:10:57.313 ************************************ 00:10:57.313 12:54:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:10:57.313 12:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:57.313 12:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:57.313 12:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:57.313 12:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:57.313 12:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:57.313 12:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:57.313 12:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:57.313 12:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:57.313 12:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:57.313 12:54:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:57.313 12:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:57.313 12:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:57.313 12:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:57.313 12:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:57.313 12:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:57.313 12:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:57.313 12:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:57.313 12:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:57.313 12:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:57.313 12:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:57.313 12:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:57.313 12:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:57.313 12:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:57.313 12:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:57.313 12:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:57.313 12:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:57.313 12:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:57.313 12:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VdYF3KqiTa 00:10:57.313 12:54:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=86093 00:10:57.314 12:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:57.314 12:54:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 86093 00:10:57.314 12:54:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 86093 ']' 00:10:57.314 12:54:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.314 12:54:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:57.314 12:54:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.314 12:54:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:57.314 12:54:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.314 [2024-11-26 12:54:14.948887] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:57.314 [2024-11-26 12:54:14.949092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86093 ] 00:10:57.574 [2024-11-26 12:54:15.101053] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.574 [2024-11-26 12:54:15.144800] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.574 [2024-11-26 12:54:15.186818] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.574 [2024-11-26 12:54:15.186937] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.143 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:58.143 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:58.143 12:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:58.143 12:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:58.143 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.143 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.143 BaseBdev1_malloc 00:10:58.143 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.143 12:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:58.143 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.143 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.143 true 00:10:58.143 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:58.143 12:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:58.143 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.143 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.143 [2024-11-26 12:54:15.800682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:58.143 [2024-11-26 12:54:15.800750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.143 [2024-11-26 12:54:15.800792] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:58.143 [2024-11-26 12:54:15.800800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.143 [2024-11-26 12:54:15.802840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.143 [2024-11-26 12:54:15.802936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:58.143 BaseBdev1 00:10:58.143 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.143 12:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:58.143 12:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:58.143 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.143 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.404 BaseBdev2_malloc 00:10:58.404 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:58.405 12:54:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.405 true 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.405 [2024-11-26 12:54:15.857763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:58.405 [2024-11-26 12:54:15.857828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.405 [2024-11-26 12:54:15.857852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:58.405 [2024-11-26 12:54:15.857862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.405 [2024-11-26 12:54:15.860573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.405 [2024-11-26 12:54:15.860617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:58.405 BaseBdev2 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:58.405 BaseBdev3_malloc 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.405 true 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.405 [2024-11-26 12:54:15.898097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:58.405 [2024-11-26 12:54:15.898143] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.405 [2024-11-26 12:54:15.898177] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:58.405 [2024-11-26 12:54:15.898185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.405 [2024-11-26 12:54:15.900217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.405 [2024-11-26 12:54:15.900252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:58.405 BaseBdev3 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.405 BaseBdev4_malloc 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.405 true 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.405 [2024-11-26 12:54:15.938535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:58.405 [2024-11-26 12:54:15.938581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.405 [2024-11-26 12:54:15.938616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:58.405 [2024-11-26 12:54:15.938625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.405 [2024-11-26 12:54:15.940643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.405 [2024-11-26 12:54:15.940758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:58.405 BaseBdev4 
00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.405 [2024-11-26 12:54:15.950564] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:58.405 [2024-11-26 12:54:15.952369] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:58.405 [2024-11-26 12:54:15.952452] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:58.405 [2024-11-26 12:54:15.952504] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:58.405 [2024-11-26 12:54:15.952689] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:58.405 [2024-11-26 12:54:15.952701] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:58.405 [2024-11-26 12:54:15.952963] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:58.405 [2024-11-26 12:54:15.953101] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:58.405 [2024-11-26 12:54:15.953113] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:58.405 [2024-11-26 12:54:15.953249] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.405 12:54:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.405 12:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.405 "name": "raid_bdev1", 00:10:58.405 "uuid": "bfd9d497-dbc3-41e8-b09c-3b0063536b49", 00:10:58.405 "strip_size_kb": 0, 00:10:58.405 "state": "online", 00:10:58.405 "raid_level": "raid1", 00:10:58.405 "superblock": true, 00:10:58.405 "num_base_bdevs": 4, 00:10:58.406 "num_base_bdevs_discovered": 4, 00:10:58.406 
"num_base_bdevs_operational": 4, 00:10:58.406 "base_bdevs_list": [ 00:10:58.406 { 00:10:58.406 "name": "BaseBdev1", 00:10:58.406 "uuid": "b9cf587f-ab6a-5789-94e0-3e8d16fe739e", 00:10:58.406 "is_configured": true, 00:10:58.406 "data_offset": 2048, 00:10:58.406 "data_size": 63488 00:10:58.406 }, 00:10:58.406 { 00:10:58.406 "name": "BaseBdev2", 00:10:58.406 "uuid": "33d4922b-440b-5df5-8f9b-e855567036ee", 00:10:58.406 "is_configured": true, 00:10:58.406 "data_offset": 2048, 00:10:58.406 "data_size": 63488 00:10:58.406 }, 00:10:58.406 { 00:10:58.406 "name": "BaseBdev3", 00:10:58.406 "uuid": "a444f8af-52e4-5f29-b101-e6e02bf14623", 00:10:58.406 "is_configured": true, 00:10:58.406 "data_offset": 2048, 00:10:58.406 "data_size": 63488 00:10:58.406 }, 00:10:58.406 { 00:10:58.406 "name": "BaseBdev4", 00:10:58.406 "uuid": "6f6a2326-0016-5b30-997f-9cc61d428e32", 00:10:58.406 "is_configured": true, 00:10:58.406 "data_offset": 2048, 00:10:58.406 "data_size": 63488 00:10:58.406 } 00:10:58.406 ] 00:10:58.406 }' 00:10:58.406 12:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.406 12:54:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.976 12:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:58.976 12:54:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:58.976 [2024-11-26 12:54:16.446041] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:59.925 12:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:59.925 12:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.925 12:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.925 [2024-11-26 12:54:17.371704] 
bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:59.925 [2024-11-26 12:54:17.371848] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:59.925 [2024-11-26 12:54:17.372127] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:10:59.925 12:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.925 12:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:59.925 12:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:59.925 12:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:59.925 12:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:10:59.925 12:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:59.925 12:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.925 12:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.925 12:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.925 12:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:59.925 12:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:59.925 12:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.925 12:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.925 12:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.925 12:54:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.925 12:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.925 12:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.925 12:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.925 12:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.925 12:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.925 12:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.925 "name": "raid_bdev1", 00:10:59.925 "uuid": "bfd9d497-dbc3-41e8-b09c-3b0063536b49", 00:10:59.925 "strip_size_kb": 0, 00:10:59.925 "state": "online", 00:10:59.925 "raid_level": "raid1", 00:10:59.925 "superblock": true, 00:10:59.925 "num_base_bdevs": 4, 00:10:59.925 "num_base_bdevs_discovered": 3, 00:10:59.925 "num_base_bdevs_operational": 3, 00:10:59.925 "base_bdevs_list": [ 00:10:59.925 { 00:10:59.925 "name": null, 00:10:59.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.925 "is_configured": false, 00:10:59.925 "data_offset": 0, 00:10:59.925 "data_size": 63488 00:10:59.925 }, 00:10:59.925 { 00:10:59.925 "name": "BaseBdev2", 00:10:59.925 "uuid": "33d4922b-440b-5df5-8f9b-e855567036ee", 00:10:59.925 "is_configured": true, 00:10:59.925 "data_offset": 2048, 00:10:59.925 "data_size": 63488 00:10:59.925 }, 00:10:59.925 { 00:10:59.925 "name": "BaseBdev3", 00:10:59.925 "uuid": "a444f8af-52e4-5f29-b101-e6e02bf14623", 00:10:59.925 "is_configured": true, 00:10:59.925 "data_offset": 2048, 00:10:59.925 "data_size": 63488 00:10:59.925 }, 00:10:59.925 { 00:10:59.925 "name": "BaseBdev4", 00:10:59.925 "uuid": "6f6a2326-0016-5b30-997f-9cc61d428e32", 00:10:59.925 "is_configured": true, 00:10:59.925 "data_offset": 2048, 00:10:59.925 "data_size": 63488 00:10:59.925 } 00:10:59.925 ] 
00:10:59.925 }' 00:10:59.925 12:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.925 12:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.200 12:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:00.200 12:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.200 12:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.200 [2024-11-26 12:54:17.819458] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:00.200 [2024-11-26 12:54:17.819499] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:00.200 [2024-11-26 12:54:17.821912] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.200 [2024-11-26 12:54:17.821964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.200 [2024-11-26 12:54:17.822057] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:00.200 [2024-11-26 12:54:17.822068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:11:00.200 { 00:11:00.200 "results": [ 00:11:00.200 { 00:11:00.200 "job": "raid_bdev1", 00:11:00.200 "core_mask": "0x1", 00:11:00.200 "workload": "randrw", 00:11:00.200 "percentage": 50, 00:11:00.200 "status": "finished", 00:11:00.200 "queue_depth": 1, 00:11:00.200 "io_size": 131072, 00:11:00.200 "runtime": 1.374191, 00:11:00.200 "iops": 13114.625259516326, 00:11:00.200 "mibps": 1639.3281574395407, 00:11:00.200 "io_failed": 0, 00:11:00.200 "io_timeout": 0, 00:11:00.200 "avg_latency_us": 73.81691101463083, 00:11:00.200 "min_latency_us": 21.910917030567685, 00:11:00.200 "max_latency_us": 1323.598253275109 00:11:00.200 } 00:11:00.200 ], 00:11:00.200 "core_count": 1 
00:11:00.200 } 00:11:00.200 12:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.201 12:54:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 86093 00:11:00.201 12:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 86093 ']' 00:11:00.201 12:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 86093 00:11:00.201 12:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:11:00.201 12:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:00.201 12:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86093 00:11:00.201 12:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:00.201 12:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:00.201 12:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86093' 00:11:00.201 killing process with pid 86093 00:11:00.201 12:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 86093 00:11:00.201 [2024-11-26 12:54:17.867707] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:00.201 12:54:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 86093 00:11:00.461 [2024-11-26 12:54:17.903431] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:00.461 12:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VdYF3KqiTa 00:11:00.461 12:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:00.461 12:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:00.722 12:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:11:00.722 12:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:00.722 12:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:00.722 12:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:00.722 12:54:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:00.722 00:11:00.722 real 0m3.300s 00:11:00.722 user 0m4.097s 00:11:00.722 sys 0m0.550s 00:11:00.722 12:54:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:00.722 12:54:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.722 ************************************ 00:11:00.722 END TEST raid_write_error_test 00:11:00.722 ************************************ 00:11:00.722 12:54:18 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:11:00.722 12:54:18 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:00.722 12:54:18 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:11:00.722 12:54:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:00.722 12:54:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:00.722 12:54:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:00.722 ************************************ 00:11:00.722 START TEST raid_rebuild_test 00:11:00.722 ************************************ 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:00.722 
12:54:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=86220 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 86220 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 86220 ']' 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:00.722 12:54:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.722 [2024-11-26 12:54:18.318238] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:00.722 [2024-11-26 12:54:18.318447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:11:00.722 Zero copy mechanism will not be used. 
00:11:00.722 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86220 ] 00:11:00.982 [2024-11-26 12:54:18.477925] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.982 [2024-11-26 12:54:18.522231] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.982 [2024-11-26 12:54:18.564404] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:00.982 [2024-11-26 12:54:18.564537] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.553 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:01.553 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:11:01.553 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:01.553 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:01.553 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.553 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.553 BaseBdev1_malloc 00:11:01.553 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.553 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:01.553 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.553 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.553 [2024-11-26 12:54:19.162358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:01.553 [2024-11-26 12:54:19.162505] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.553 [2024-11-26 
12:54:19.162556] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:01.553 [2024-11-26 12:54:19.162607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.553 [2024-11-26 12:54:19.164643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.553 [2024-11-26 12:54:19.164713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:01.553 BaseBdev1 00:11:01.553 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.553 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:01.553 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:01.553 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.553 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.553 BaseBdev2_malloc 00:11:01.553 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.553 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:01.553 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.553 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.553 [2024-11-26 12:54:19.206242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:01.553 [2024-11-26 12:54:19.206340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.553 [2024-11-26 12:54:19.206384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:01.553 [2024-11-26 12:54:19.206403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:11:01.553 [2024-11-26 12:54:19.210727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.553 [2024-11-26 12:54:19.210773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:01.553 BaseBdev2 00:11:01.553 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.553 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:01.553 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.553 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.813 spare_malloc 00:11:01.813 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.813 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:01.813 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.813 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.813 spare_delay 00:11:01.813 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.813 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:01.813 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.813 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.813 [2024-11-26 12:54:19.247891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:01.813 [2024-11-26 12:54:19.247946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.813 [2024-11-26 12:54:19.247984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:11:01.813 [2024-11-26 12:54:19.247992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.813 [2024-11-26 12:54:19.250025] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.813 [2024-11-26 12:54:19.250061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:01.813 spare 00:11:01.813 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.813 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:01.813 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.813 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.813 [2024-11-26 12:54:19.259893] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:01.813 [2024-11-26 12:54:19.261698] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:01.813 [2024-11-26 12:54:19.261776] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:01.813 [2024-11-26 12:54:19.261787] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:01.813 [2024-11-26 12:54:19.262015] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:01.813 [2024-11-26 12:54:19.262117] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:01.813 [2024-11-26 12:54:19.262129] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:01.813 [2024-11-26 12:54:19.262262] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.813 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.813 
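The stack assembled above (malloc → delay → passthru, mirrored as raid1 over BaseBdev1 and BaseBdev2) produces a raid bdev of 65536 blocks of 512 bytes, matching the `blockcnt 65536, blocklen 512` line. A minimal sketch of that size rule for RAID1, using a hypothetical helper that is not part of the SPDK test harness:

```python
def raid1_num_blocks(base_block_counts, data_offset=0):
    """For RAID1 (mirroring), usable size is the smallest base bdev
    minus the per-base data offset (0 here, since superblock=false)."""
    return min(base_block_counts) - data_offset

# bdev_malloc_create 32 512 -> 32 MiB / 512 B = 65536 blocks per base.
assert raid1_num_blocks([65536, 65536], data_offset=0) == 65536
```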
12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:01.813 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.813 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.813 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.813 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.813 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:01.813 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.813 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.813 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.813 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.813 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.813 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.813 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.813 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.813 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.813 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.813 "name": "raid_bdev1", 00:11:01.813 "uuid": "24f00b67-b060-4295-b4ad-1a6025aac7e1", 00:11:01.813 "strip_size_kb": 0, 00:11:01.813 "state": "online", 00:11:01.813 "raid_level": "raid1", 00:11:01.813 "superblock": false, 00:11:01.813 "num_base_bdevs": 2, 00:11:01.813 "num_base_bdevs_discovered": 
2, 00:11:01.813 "num_base_bdevs_operational": 2, 00:11:01.813 "base_bdevs_list": [ 00:11:01.813 { 00:11:01.813 "name": "BaseBdev1", 00:11:01.813 "uuid": "1dd30bcd-0186-565e-aafe-cede449751d4", 00:11:01.814 "is_configured": true, 00:11:01.814 "data_offset": 0, 00:11:01.814 "data_size": 65536 00:11:01.814 }, 00:11:01.814 { 00:11:01.814 "name": "BaseBdev2", 00:11:01.814 "uuid": "8cc8e961-45cc-5c21-a2b4-9449b5ce798e", 00:11:01.814 "is_configured": true, 00:11:01.814 "data_offset": 0, 00:11:01.814 "data_size": 65536 00:11:01.814 } 00:11:01.814 ] 00:11:01.814 }' 00:11:01.814 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.814 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.073 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:02.073 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:02.073 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.073 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.073 [2024-11-26 12:54:19.715565] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.073 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.333 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:02.333 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.333 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:02.333 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.333 12:54:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.333 12:54:19 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.333 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:02.333 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:02.333 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:02.333 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:02.333 12:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:02.333 12:54:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:02.333 12:54:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:02.333 12:54:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:02.333 12:54:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:02.333 12:54:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:02.333 12:54:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:02.333 12:54:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:02.333 12:54:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:02.333 12:54:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:02.333 [2024-11-26 12:54:19.959001] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:02.333 /dev/nbd0 00:11:02.333 12:54:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:02.333 12:54:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:02.333 12:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 
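The `waitfornbd` helper running here polls up to 20 times for `nbd0` to appear (the real check greps `/proc/partitions`), breaking as soon as it does. A sketch of that bounded-poll shape, with a simulated presence check standing in for the grep:

```python
def wait_for_device(present, max_attempts=20):
    """Bounded poll mirroring waitfornbd: try up to max_attempts
    times, return the attempt on which the device showed up."""
    for attempt in range(1, max_attempts + 1):
        if present(attempt):
            return attempt
    raise TimeoutError("device never appeared")

# Simulated device that becomes visible on the third check.
assert wait_for_device(lambda i: i >= 3) == 3
```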
00:11:02.333 12:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:02.333 12:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:02.333 12:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:02.333 12:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:02.593 12:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:02.593 12:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:02.593 12:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:02.593 12:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:02.593 1+0 records in 00:11:02.593 1+0 records out 00:11:02.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565402 s, 7.2 MB/s 00:11:02.593 12:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:02.593 12:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:02.593 12:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:02.593 12:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:02.593 12:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:02.593 12:54:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:02.593 12:54:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:02.593 12:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:02.593 12:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # 
write_unit_size=1 00:11:02.593 12:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:11:05.888 65536+0 records in 00:11:05.888 65536+0 records out 00:11:05.888 33554432 bytes (34 MB, 32 MiB) copied, 3.48791 s, 9.6 MB/s 00:11:05.888 12:54:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:05.888 12:54:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:05.888 12:54:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:05.888 12:54:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:05.888 12:54:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:05.888 12:54:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:05.888 12:54:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:06.147 12:54:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:06.147 [2024-11-26 12:54:23.735224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.147 12:54:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:06.147 12:54:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:06.147 12:54:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:06.147 12:54:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:06.147 12:54:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:06.147 12:54:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:06.147 12:54:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:06.147 
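The fill pass above writes the whole 32 MiB raid1 device through `/dev/nbd0` in 512-byte records. The figures dd prints are internally consistent, as a quick check shows (dd reports decimal MB/s):

```python
records = 65536    # count=65536 writes of bs=512
bs = 512
seconds = 3.48791  # elapsed time reported by dd

total = records * bs
assert total == 33554432  # 32 MiB, as dd reports

rate_mb_s = round(total / seconds / 1e6, 1)  # decimal MB/s, dd's unit
assert rate_mb_s == 9.6
```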
12:54:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:06.147 12:54:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.147 12:54:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.147 [2024-11-26 12:54:23.751311] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:06.147 12:54:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.147 12:54:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:06.147 12:54:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.147 12:54:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.147 12:54:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.147 12:54:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.147 12:54:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:06.147 12:54:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.147 12:54:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.147 12:54:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.147 12:54:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.147 12:54:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.147 12:54:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.147 12:54:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.147 12:54:23 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:06.147 12:54:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.147 12:54:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.147 "name": "raid_bdev1", 00:11:06.147 "uuid": "24f00b67-b060-4295-b4ad-1a6025aac7e1", 00:11:06.147 "strip_size_kb": 0, 00:11:06.147 "state": "online", 00:11:06.147 "raid_level": "raid1", 00:11:06.147 "superblock": false, 00:11:06.147 "num_base_bdevs": 2, 00:11:06.147 "num_base_bdevs_discovered": 1, 00:11:06.147 "num_base_bdevs_operational": 1, 00:11:06.147 "base_bdevs_list": [ 00:11:06.147 { 00:11:06.147 "name": null, 00:11:06.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.147 "is_configured": false, 00:11:06.147 "data_offset": 0, 00:11:06.147 "data_size": 65536 00:11:06.147 }, 00:11:06.147 { 00:11:06.147 "name": "BaseBdev2", 00:11:06.147 "uuid": "8cc8e961-45cc-5c21-a2b4-9449b5ce798e", 00:11:06.147 "is_configured": true, 00:11:06.147 "data_offset": 0, 00:11:06.147 "data_size": 65536 00:11:06.147 } 00:11:06.147 ] 00:11:06.147 }' 00:11:06.147 12:54:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.147 12:54:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.716 12:54:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:06.716 12:54:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.716 12:54:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.716 [2024-11-26 12:54:24.158600] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:06.716 [2024-11-26 12:54:24.162781] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09a30 00:11:06.716 12:54:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.716 12:54:24 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:06.717 [2024-11-26 12:54:24.164688] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:07.710 "name": "raid_bdev1", 00:11:07.710 "uuid": "24f00b67-b060-4295-b4ad-1a6025aac7e1", 00:11:07.710 "strip_size_kb": 0, 00:11:07.710 "state": "online", 00:11:07.710 "raid_level": "raid1", 00:11:07.710 "superblock": false, 00:11:07.710 "num_base_bdevs": 2, 00:11:07.710 "num_base_bdevs_discovered": 2, 00:11:07.710 "num_base_bdevs_operational": 2, 00:11:07.710 "process": { 00:11:07.710 "type": "rebuild", 00:11:07.710 "target": "spare", 00:11:07.710 "progress": { 00:11:07.710 "blocks": 20480, 00:11:07.710 "percent": 31 00:11:07.710 } 00:11:07.710 }, 00:11:07.710 "base_bdevs_list": [ 00:11:07.710 { 
00:11:07.710 "name": "spare", 00:11:07.710 "uuid": "7da4009b-53f0-5c9e-b2fb-c57d843f5131", 00:11:07.710 "is_configured": true, 00:11:07.710 "data_offset": 0, 00:11:07.710 "data_size": 65536 00:11:07.710 }, 00:11:07.710 { 00:11:07.710 "name": "BaseBdev2", 00:11:07.710 "uuid": "8cc8e961-45cc-5c21-a2b4-9449b5ce798e", 00:11:07.710 "is_configured": true, 00:11:07.710 "data_offset": 0, 00:11:07.710 "data_size": 65536 00:11:07.710 } 00:11:07.710 ] 00:11:07.710 }' 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.710 [2024-11-26 12:54:25.325215] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:07.710 [2024-11-26 12:54:25.369036] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:07.710 [2024-11-26 12:54:25.369157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.710 [2024-11-26 12:54:25.369179] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:07.710 [2024-11-26 12:54:25.369188] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.710 12:54:25 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.710 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.970 12:54:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.970 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.970 "name": "raid_bdev1", 00:11:07.970 "uuid": "24f00b67-b060-4295-b4ad-1a6025aac7e1", 00:11:07.970 "strip_size_kb": 0, 00:11:07.970 "state": "online", 00:11:07.970 "raid_level": "raid1", 00:11:07.970 "superblock": false, 00:11:07.970 "num_base_bdevs": 2, 00:11:07.970 "num_base_bdevs_discovered": 1, 
00:11:07.970 "num_base_bdevs_operational": 1, 00:11:07.970 "base_bdevs_list": [ 00:11:07.970 { 00:11:07.970 "name": null, 00:11:07.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.970 "is_configured": false, 00:11:07.970 "data_offset": 0, 00:11:07.970 "data_size": 65536 00:11:07.970 }, 00:11:07.970 { 00:11:07.970 "name": "BaseBdev2", 00:11:07.970 "uuid": "8cc8e961-45cc-5c21-a2b4-9449b5ce798e", 00:11:07.970 "is_configured": true, 00:11:07.970 "data_offset": 0, 00:11:07.970 "data_size": 65536 00:11:07.970 } 00:11:07.970 ] 00:11:07.970 }' 00:11:07.970 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.970 12:54:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.229 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:08.229 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:08.229 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:08.229 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:08.229 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:08.229 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.229 12:54:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.229 12:54:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.229 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.229 12:54:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.229 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:08.229 "name": "raid_bdev1", 00:11:08.229 "uuid": 
"24f00b67-b060-4295-b4ad-1a6025aac7e1", 00:11:08.229 "strip_size_kb": 0, 00:11:08.229 "state": "online", 00:11:08.229 "raid_level": "raid1", 00:11:08.229 "superblock": false, 00:11:08.229 "num_base_bdevs": 2, 00:11:08.229 "num_base_bdevs_discovered": 1, 00:11:08.229 "num_base_bdevs_operational": 1, 00:11:08.229 "base_bdevs_list": [ 00:11:08.229 { 00:11:08.229 "name": null, 00:11:08.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.229 "is_configured": false, 00:11:08.229 "data_offset": 0, 00:11:08.229 "data_size": 65536 00:11:08.229 }, 00:11:08.229 { 00:11:08.229 "name": "BaseBdev2", 00:11:08.229 "uuid": "8cc8e961-45cc-5c21-a2b4-9449b5ce798e", 00:11:08.229 "is_configured": true, 00:11:08.229 "data_offset": 0, 00:11:08.229 "data_size": 65536 00:11:08.229 } 00:11:08.229 ] 00:11:08.229 }' 00:11:08.229 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:08.229 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:08.229 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:08.488 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:08.488 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:08.488 12:54:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.488 12:54:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.488 [2024-11-26 12:54:25.956475] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:08.488 [2024-11-26 12:54:25.960589] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09b00 00:11:08.488 12:54:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.488 12:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 
1 00:11:08.488 [2024-11-26 12:54:25.962410] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:09.427 12:54:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:09.427 12:54:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:09.427 12:54:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:09.427 12:54:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:09.427 12:54:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:09.427 12:54:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.427 12:54:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.427 12:54:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.427 12:54:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.427 12:54:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.427 12:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:09.427 "name": "raid_bdev1", 00:11:09.427 "uuid": "24f00b67-b060-4295-b4ad-1a6025aac7e1", 00:11:09.427 "strip_size_kb": 0, 00:11:09.427 "state": "online", 00:11:09.427 "raid_level": "raid1", 00:11:09.427 "superblock": false, 00:11:09.427 "num_base_bdevs": 2, 00:11:09.427 "num_base_bdevs_discovered": 2, 00:11:09.427 "num_base_bdevs_operational": 2, 00:11:09.427 "process": { 00:11:09.427 "type": "rebuild", 00:11:09.427 "target": "spare", 00:11:09.427 "progress": { 00:11:09.427 "blocks": 20480, 00:11:09.427 "percent": 31 00:11:09.427 } 00:11:09.427 }, 00:11:09.427 "base_bdevs_list": [ 00:11:09.427 { 00:11:09.427 "name": "spare", 00:11:09.427 "uuid": 
"7da4009b-53f0-5c9e-b2fb-c57d843f5131", 00:11:09.427 "is_configured": true, 00:11:09.427 "data_offset": 0, 00:11:09.427 "data_size": 65536 00:11:09.427 }, 00:11:09.427 { 00:11:09.427 "name": "BaseBdev2", 00:11:09.427 "uuid": "8cc8e961-45cc-5c21-a2b4-9449b5ce798e", 00:11:09.427 "is_configured": true, 00:11:09.427 "data_offset": 0, 00:11:09.427 "data_size": 65536 00:11:09.427 } 00:11:09.427 ] 00:11:09.427 }' 00:11:09.427 12:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:09.427 12:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:09.427 12:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:09.687 12:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:09.687 12:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:09.687 12:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:09.687 12:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:09.687 12:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:09.687 12:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=291 00:11:09.687 12:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:09.687 12:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:09.687 12:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:09.687 12:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:09.687 12:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:09.688 12:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:11:09.688 12:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.688 12:54:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.688 12:54:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.688 12:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.688 12:54:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.688 12:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:09.688 "name": "raid_bdev1", 00:11:09.688 "uuid": "24f00b67-b060-4295-b4ad-1a6025aac7e1", 00:11:09.688 "strip_size_kb": 0, 00:11:09.688 "state": "online", 00:11:09.688 "raid_level": "raid1", 00:11:09.688 "superblock": false, 00:11:09.688 "num_base_bdevs": 2, 00:11:09.688 "num_base_bdevs_discovered": 2, 00:11:09.688 "num_base_bdevs_operational": 2, 00:11:09.688 "process": { 00:11:09.688 "type": "rebuild", 00:11:09.688 "target": "spare", 00:11:09.688 "progress": { 00:11:09.688 "blocks": 22528, 00:11:09.688 "percent": 34 00:11:09.688 } 00:11:09.688 }, 00:11:09.688 "base_bdevs_list": [ 00:11:09.688 { 00:11:09.688 "name": "spare", 00:11:09.688 "uuid": "7da4009b-53f0-5c9e-b2fb-c57d843f5131", 00:11:09.688 "is_configured": true, 00:11:09.688 "data_offset": 0, 00:11:09.688 "data_size": 65536 00:11:09.688 }, 00:11:09.688 { 00:11:09.688 "name": "BaseBdev2", 00:11:09.688 "uuid": "8cc8e961-45cc-5c21-a2b4-9449b5ce798e", 00:11:09.688 "is_configured": true, 00:11:09.688 "data_offset": 0, 00:11:09.688 "data_size": 65536 00:11:09.688 } 00:11:09.688 ] 00:11:09.688 }' 00:11:09.688 12:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:09.688 12:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:09.688 12:54:27 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:09.688 12:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:09.688 12:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:10.628 12:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:10.628 12:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:10.628 12:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:10.628 12:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:10.628 12:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:10.628 12:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:10.628 12:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.628 12:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.628 12:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.628 12:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.628 12:54:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.887 12:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:10.887 "name": "raid_bdev1", 00:11:10.887 "uuid": "24f00b67-b060-4295-b4ad-1a6025aac7e1", 00:11:10.887 "strip_size_kb": 0, 00:11:10.887 "state": "online", 00:11:10.887 "raid_level": "raid1", 00:11:10.887 "superblock": false, 00:11:10.887 "num_base_bdevs": 2, 00:11:10.887 "num_base_bdevs_discovered": 2, 00:11:10.887 "num_base_bdevs_operational": 2, 00:11:10.887 "process": { 00:11:10.887 "type": "rebuild", 00:11:10.887 "target": "spare", 
00:11:10.887 "progress": { 00:11:10.887 "blocks": 47104, 00:11:10.887 "percent": 71 00:11:10.887 } 00:11:10.887 }, 00:11:10.887 "base_bdevs_list": [ 00:11:10.887 { 00:11:10.887 "name": "spare", 00:11:10.887 "uuid": "7da4009b-53f0-5c9e-b2fb-c57d843f5131", 00:11:10.887 "is_configured": true, 00:11:10.887 "data_offset": 0, 00:11:10.887 "data_size": 65536 00:11:10.887 }, 00:11:10.887 { 00:11:10.887 "name": "BaseBdev2", 00:11:10.887 "uuid": "8cc8e961-45cc-5c21-a2b4-9449b5ce798e", 00:11:10.887 "is_configured": true, 00:11:10.887 "data_offset": 0, 00:11:10.887 "data_size": 65536 00:11:10.887 } 00:11:10.887 ] 00:11:10.887 }' 00:11:10.887 12:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:10.887 12:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:10.887 12:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:10.887 12:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:10.887 12:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:11.825 [2024-11-26 12:54:29.172974] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:11.825 [2024-11-26 12:54:29.173040] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:11.825 [2024-11-26 12:54:29.173080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.825 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:11.825 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:11.825 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:11.825 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:11:11.825 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:11.825 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:11.825 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.825 12:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.825 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.825 12:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.825 12:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.825 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:11.825 "name": "raid_bdev1", 00:11:11.825 "uuid": "24f00b67-b060-4295-b4ad-1a6025aac7e1", 00:11:11.825 "strip_size_kb": 0, 00:11:11.825 "state": "online", 00:11:11.825 "raid_level": "raid1", 00:11:11.825 "superblock": false, 00:11:11.825 "num_base_bdevs": 2, 00:11:11.825 "num_base_bdevs_discovered": 2, 00:11:11.825 "num_base_bdevs_operational": 2, 00:11:11.825 "base_bdevs_list": [ 00:11:11.825 { 00:11:11.825 "name": "spare", 00:11:11.825 "uuid": "7da4009b-53f0-5c9e-b2fb-c57d843f5131", 00:11:11.825 "is_configured": true, 00:11:11.825 "data_offset": 0, 00:11:11.825 "data_size": 65536 00:11:11.825 }, 00:11:11.825 { 00:11:11.825 "name": "BaseBdev2", 00:11:11.825 "uuid": "8cc8e961-45cc-5c21-a2b4-9449b5ce798e", 00:11:11.825 "is_configured": true, 00:11:11.825 "data_offset": 0, 00:11:11.825 "data_size": 65536 00:11:11.825 } 00:11:11.825 ] 00:11:11.825 }' 00:11:11.825 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:12.085 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:12.086 "name": "raid_bdev1", 00:11:12.086 "uuid": "24f00b67-b060-4295-b4ad-1a6025aac7e1", 00:11:12.086 "strip_size_kb": 0, 00:11:12.086 "state": "online", 00:11:12.086 "raid_level": "raid1", 00:11:12.086 "superblock": false, 00:11:12.086 "num_base_bdevs": 2, 00:11:12.086 "num_base_bdevs_discovered": 2, 00:11:12.086 "num_base_bdevs_operational": 2, 00:11:12.086 "base_bdevs_list": [ 00:11:12.086 { 00:11:12.086 "name": "spare", 00:11:12.086 "uuid": "7da4009b-53f0-5c9e-b2fb-c57d843f5131", 00:11:12.086 "is_configured": true, 00:11:12.086 "data_offset": 0, 00:11:12.086 "data_size": 65536 
00:11:12.086 }, 00:11:12.086 { 00:11:12.086 "name": "BaseBdev2", 00:11:12.086 "uuid": "8cc8e961-45cc-5c21-a2b4-9449b5ce798e", 00:11:12.086 "is_configured": true, 00:11:12.086 "data_offset": 0, 00:11:12.086 "data_size": 65536 00:11:12.086 } 00:11:12.086 ] 00:11:12.086 }' 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.086 "name": "raid_bdev1", 00:11:12.086 "uuid": "24f00b67-b060-4295-b4ad-1a6025aac7e1", 00:11:12.086 "strip_size_kb": 0, 00:11:12.086 "state": "online", 00:11:12.086 "raid_level": "raid1", 00:11:12.086 "superblock": false, 00:11:12.086 "num_base_bdevs": 2, 00:11:12.086 "num_base_bdevs_discovered": 2, 00:11:12.086 "num_base_bdevs_operational": 2, 00:11:12.086 "base_bdevs_list": [ 00:11:12.086 { 00:11:12.086 "name": "spare", 00:11:12.086 "uuid": "7da4009b-53f0-5c9e-b2fb-c57d843f5131", 00:11:12.086 "is_configured": true, 00:11:12.086 "data_offset": 0, 00:11:12.086 "data_size": 65536 00:11:12.086 }, 00:11:12.086 { 00:11:12.086 "name": "BaseBdev2", 00:11:12.086 "uuid": "8cc8e961-45cc-5c21-a2b4-9449b5ce798e", 00:11:12.086 "is_configured": true, 00:11:12.086 "data_offset": 0, 00:11:12.086 "data_size": 65536 00:11:12.086 } 00:11:12.086 ] 00:11:12.086 }' 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.086 12:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.701 12:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:12.701 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.701 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.701 [2024-11-26 12:54:30.143546] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:12.701 [2024-11-26 12:54:30.143577] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:11:12.701 [2024-11-26 12:54:30.143661] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:12.701 [2024-11-26 12:54:30.143725] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:12.701 [2024-11-26 12:54:30.143742] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:12.701 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.701 12:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.701 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.701 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.701 12:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:11:12.701 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.701 12:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:12.701 12:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:12.701 12:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:12.701 12:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:12.701 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:12.702 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:12.702 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:12.702 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:12.702 12:54:30 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:12.702 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:12.702 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:12.702 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:12.702 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:12.961 /dev/nbd0 00:11:12.961 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:12.961 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:12.961 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:12.961 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:12.961 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:12.961 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:12.961 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:12.961 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:12.961 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:12.961 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:12.961 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:12.961 1+0 records in 00:11:12.961 1+0 records out 00:11:12.961 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478485 s, 8.6 MB/s 00:11:12.961 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:12.962 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:12.962 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:12.962 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:12.962 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:12.962 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:12.962 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:12.962 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:12.962 /dev/nbd1 00:11:13.222 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:13.222 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:13.222 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:13.222 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:13.222 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:13.222 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:13.222 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:13.222 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:13.222 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:13.222 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:13.222 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
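The `waitfornbd` steps above (autotest_common.sh@868-889) poll `/proc/partitions` with `grep -q -w` until the NBD device appears, then smoke-test it with a one-block `dd`. A hedged sketch of that polling pattern, reading a stand-in file instead of `/proc/partitions` so it runs without real NBD devices (`waitfor_dev` is an illustrative name, not the SPDK helper):

```shell
#!/usr/bin/env bash
# Sketch of the waitfornbd polling loop: retry a whole-word grep for the
# device name until it shows up in a partition table, bounded at 20 tries.
waitfor_dev() {
    local name=$1 table=$2 i
    for ((i = 1; i <= 20; i++)); do
        # -w matches "nbd0" as a whole word, so "nbd0" won't match "nbd01"
        grep -q -w "$name" "$table" && return 0
        sleep 0.1
    done
    return 1
}

tbl=$(mktemp)
echo "259 0 1048576 nbd0" > "$tbl"
waitfor_dev nbd0 "$tbl" && echo "nbd0 present"
rm -f "$tbl"
```

The real helper then confirms the device is readable with `dd if=/dev/nbd0 ... count=1 iflag=direct`, which is the `1+0 records in / 1+0 records out` output in the log.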
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:13.222 1+0 records in 00:11:13.222 1+0 records out 00:11:13.222 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394662 s, 10.4 MB/s 00:11:13.222 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:13.222 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:13.222 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:13.222 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:13.222 12:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:13.222 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:13.222 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:13.222 12:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:13.222 12:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:13.222 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:13.222 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:13.222 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:13.222 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:13.222 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:13.222 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:13.482 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # 
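After both devices are exported, bdev_raid.sh@738 runs `cmp -i 0 /dev/nbd0 /dev/nbd1` to verify the rebuilt base bdev and the spare are byte-identical. A sketch of that verification using two temporary files in place of the NBD devices (the file contents are placeholders):

```shell
#!/usr/bin/env bash
# Stand-in for `cmp -i 0 /dev/nbd0 /dev/nbd1`: after a successful rebuild
# the two mirror halves must compare equal from offset 0.
a=$(mktemp)
b=$(mktemp)
printf 'rebuilt-data' > "$a"
cp "$a" "$b"

# -s: silent, report via exit status only; -i 0: skip zero initial bytes
if cmp -s -i 0 "$a" "$b"; then
    echo "devices match"
fi
rm -f "$a" "$b"
```

A non-zero exit from `cmp` here would fail the test before the NBD disks are stopped, which is why the log shows no `differ` output between the `cmp` and `nbd_stop_disks` steps.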
basename /dev/nbd0 00:11:13.482 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:13.482 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:13.482 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.482 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.482 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:13.482 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:13.482 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.482 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:13.482 12:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:13.482 12:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:13.482 12:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:13.482 12:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:13.482 12:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.482 12:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.482 12:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:13.482 12:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:13.482 12:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.482 12:54:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:13.482 12:54:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 86220 00:11:13.482 12:54:31 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@950 -- # '[' -z 86220 ']' 00:11:13.482 12:54:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 86220 00:11:13.482 12:54:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:11:13.742 12:54:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:13.742 12:54:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86220 00:11:13.742 12:54:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:13.742 12:54:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:13.742 killing process with pid 86220 00:11:13.742 12:54:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86220' 00:11:13.742 12:54:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 86220 00:11:13.742 Received shutdown signal, test time was about 60.000000 seconds 00:11:13.742 00:11:13.742 Latency(us) 00:11:13.742 [2024-11-26T12:54:31.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:13.742 [2024-11-26T12:54:31.426Z] =================================================================================================================== 00:11:13.742 [2024-11-26T12:54:31.426Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:13.742 [2024-11-26 12:54:31.195828] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:13.742 12:54:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 86220 00:11:13.742 [2024-11-26 12:54:31.226685] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:11:14.002 00:11:14.002 real 0m13.232s 00:11:14.002 user 0m15.464s 00:11:14.002 sys 0m2.705s 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test -- 
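The `killprocess 86220` teardown above (autotest_common.sh@950-974) uses `kill -0 <pid>` to probe that the bdevperf process still exists before signalling it, then `wait`s for it to exit. A minimal sketch of that liveness probe against a throwaway background job (the `sleep` child is a stand-in for the bdevperf process):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern: `kill -0` sends no signal, it only
# checks that the PID exists and is signalable, so it works as an
# "is the process still alive?" test before the real `kill`.
sleep 5 &
pid=$!

if kill -0 "$pid" 2>/dev/null; then
    echo "process alive"
fi
kill "$pid" 2>/dev/null
```

The real helper additionally matches the process name via `ps --no-headers -o comm=` before killing, to avoid signalling a recycled PID.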
common/autotest_common.sh@1126 -- # xtrace_disable 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.002 ************************************ 00:11:14.002 END TEST raid_rebuild_test 00:11:14.002 ************************************ 00:11:14.002 12:54:31 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:11:14.002 12:54:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:14.002 12:54:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:14.002 12:54:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:14.002 ************************************ 00:11:14.002 START TEST raid_rebuild_test_sb 00:11:14.002 ************************************ 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:14.002 12:54:31 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=86615 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 86615 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 86615 ']' 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:14.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:14.002 12:54:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.002 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:14.002 Zero copy mechanism will not be used. 00:11:14.002 [2024-11-26 12:54:31.621231] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:14.002 [2024-11-26 12:54:31.621343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86615 ] 00:11:14.261 [2024-11-26 12:54:31.780748] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.261 [2024-11-26 12:54:31.826152] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.261 [2024-11-26 12:54:31.868745] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.261 [2024-11-26 12:54:31.868783] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.830 12:54:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:14.830 12:54:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:14.830 12:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 
00:11:14.830 12:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:14.830 12:54:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.831 12:54:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.831 BaseBdev1_malloc 00:11:14.831 12:54:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.831 12:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:14.831 12:54:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.831 12:54:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.831 [2024-11-26 12:54:32.474763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:14.831 [2024-11-26 12:54:32.474834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.831 [2024-11-26 12:54:32.474858] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:14.831 [2024-11-26 12:54:32.474878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.831 [2024-11-26 12:54:32.476950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.831 [2024-11-26 12:54:32.476986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:14.831 BaseBdev1 00:11:14.831 12:54:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.831 12:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:14.831 12:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:14.831 12:54:32 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.831 12:54:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.091 BaseBdev2_malloc 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.091 [2024-11-26 12:54:32.517272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:15.091 [2024-11-26 12:54:32.517406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.091 [2024-11-26 12:54:32.517467] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:15.091 [2024-11-26 12:54:32.517497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.091 [2024-11-26 12:54:32.521929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.091 [2024-11-26 12:54:32.521995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:15.091 BaseBdev2 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.091 spare_malloc 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.091 12:54:32 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.091 spare_delay 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.091 [2024-11-26 12:54:32.560036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:15.091 [2024-11-26 12:54:32.560087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.091 [2024-11-26 12:54:32.560107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:15.091 [2024-11-26 12:54:32.560115] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.091 [2024-11-26 12:54:32.562137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.091 [2024-11-26 12:54:32.562170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:15.091 spare 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:15.091 [2024-11-26 12:54:32.572055] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:15.091 [2024-11-26 12:54:32.573824] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:15.091 [2024-11-26 12:54:32.573968] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:15.091 [2024-11-26 12:54:32.573987] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:15.091 [2024-11-26 12:54:32.574238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:15.091 [2024-11-26 12:54:32.574372] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:15.091 [2024-11-26 12:54:32.574390] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:15.091 [2024-11-26 12:54:32.574504] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.091 12:54:32 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.091 "name": "raid_bdev1", 00:11:15.091 "uuid": "cd2c116d-b50a-4eee-a229-b8ddcb4011db", 00:11:15.091 "strip_size_kb": 0, 00:11:15.091 "state": "online", 00:11:15.091 "raid_level": "raid1", 00:11:15.091 "superblock": true, 00:11:15.091 "num_base_bdevs": 2, 00:11:15.091 "num_base_bdevs_discovered": 2, 00:11:15.091 "num_base_bdevs_operational": 2, 00:11:15.091 "base_bdevs_list": [ 00:11:15.091 { 00:11:15.091 "name": "BaseBdev1", 00:11:15.091 "uuid": "9c8d18c6-0dca-5580-b921-56df519006e3", 00:11:15.091 "is_configured": true, 00:11:15.091 "data_offset": 2048, 00:11:15.091 "data_size": 63488 00:11:15.091 }, 00:11:15.091 { 00:11:15.091 "name": "BaseBdev2", 00:11:15.091 "uuid": "4032f1fe-734f-5acb-a7dc-603148f30835", 00:11:15.091 "is_configured": true, 00:11:15.091 "data_offset": 2048, 00:11:15.091 "data_size": 63488 00:11:15.091 } 00:11:15.091 ] 00:11:15.091 }' 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.091 12:54:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:11:15.351 12:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:15.351 12:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:15.351 12:54:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.351 12:54:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.351 [2024-11-26 12:54:33.023572] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:15.610 12:54:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.610 12:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:15.610 12:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.610 12:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:15.610 12:54:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.610 12:54:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.610 12:54:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.610 12:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:15.610 12:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:15.610 12:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:15.610 12:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:15.611 12:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:15.611 12:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:15.611 
12:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:15.611 12:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:15.611 12:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:15.611 12:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:15.611 12:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:15.611 12:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:15.611 12:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:15.611 12:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:15.611 [2024-11-26 12:54:33.278947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:15.870 /dev/nbd0 00:11:15.870 12:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:15.870 12:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:15.870 12:54:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:15.870 12:54:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:15.870 12:54:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:15.870 12:54:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:15.870 12:54:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:15.870 12:54:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:15.870 12:54:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:15.870 12:54:33 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:15.870 12:54:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:15.870 1+0 records in 00:11:15.870 1+0 records out 00:11:15.870 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248408 s, 16.5 MB/s 00:11:15.870 12:54:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:15.870 12:54:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:15.870 12:54:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:15.870 12:54:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:15.870 12:54:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:15.870 12:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:15.870 12:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:15.870 12:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:15.870 12:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:15.870 12:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:11:19.163 63488+0 records in 00:11:19.163 63488+0 records out 00:11:19.163 32505856 bytes (33 MB, 31 MiB) copied, 3.4621 s, 9.4 MB/s 00:11:19.163 12:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:19.163 12:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:19.163 12:54:36 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:19.163 12:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:19.163 12:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:19.163 12:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:19.163 12:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:19.423 [2024-11-26 12:54:36.988575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.423 12:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:19.423 12:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:19.423 12:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:19.423 12:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:19.423 12:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:19.423 12:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:19.423 12:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:19.423 12:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:19.423 12:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:19.423 12:54:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.423 12:54:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.423 [2024-11-26 12:54:37.024829] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:19.423 12:54:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.423 12:54:37 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:19.423 12:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:19.423 12:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.423 12:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.423 12:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.423 12:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:19.423 12:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.423 12:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.423 12:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.423 12:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.423 12:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.423 12:54:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.423 12:54:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.423 12:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.423 12:54:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.423 12:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.423 "name": "raid_bdev1", 00:11:19.423 "uuid": "cd2c116d-b50a-4eee-a229-b8ddcb4011db", 00:11:19.423 "strip_size_kb": 0, 00:11:19.423 "state": "online", 00:11:19.423 "raid_level": "raid1", 00:11:19.423 "superblock": true, 00:11:19.423 "num_base_bdevs": 2, 
00:11:19.423 "num_base_bdevs_discovered": 1, 00:11:19.423 "num_base_bdevs_operational": 1, 00:11:19.423 "base_bdevs_list": [ 00:11:19.423 { 00:11:19.423 "name": null, 00:11:19.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.423 "is_configured": false, 00:11:19.423 "data_offset": 0, 00:11:19.423 "data_size": 63488 00:11:19.423 }, 00:11:19.423 { 00:11:19.423 "name": "BaseBdev2", 00:11:19.423 "uuid": "4032f1fe-734f-5acb-a7dc-603148f30835", 00:11:19.423 "is_configured": true, 00:11:19.423 "data_offset": 2048, 00:11:19.423 "data_size": 63488 00:11:19.423 } 00:11:19.423 ] 00:11:19.423 }' 00:11:19.423 12:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.423 12:54:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.991 12:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:19.991 12:54:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.991 12:54:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.991 [2024-11-26 12:54:37.504035] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:19.991 [2024-11-26 12:54:37.508293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca31c0 00:11:19.991 12:54:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.991 12:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:19.991 [2024-11-26 12:54:37.510275] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:20.930 12:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:20.930 12:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:20.930 12:54:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:20.930 12:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:20.930 12:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:20.930 12:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.930 12:54:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.930 12:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.930 12:54:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.930 12:54:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.930 12:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:20.930 "name": "raid_bdev1", 00:11:20.930 "uuid": "cd2c116d-b50a-4eee-a229-b8ddcb4011db", 00:11:20.930 "strip_size_kb": 0, 00:11:20.930 "state": "online", 00:11:20.930 "raid_level": "raid1", 00:11:20.930 "superblock": true, 00:11:20.930 "num_base_bdevs": 2, 00:11:20.930 "num_base_bdevs_discovered": 2, 00:11:20.930 "num_base_bdevs_operational": 2, 00:11:20.930 "process": { 00:11:20.930 "type": "rebuild", 00:11:20.930 "target": "spare", 00:11:20.930 "progress": { 00:11:20.930 "blocks": 20480, 00:11:20.930 "percent": 32 00:11:20.930 } 00:11:20.930 }, 00:11:20.930 "base_bdevs_list": [ 00:11:20.930 { 00:11:20.930 "name": "spare", 00:11:20.930 "uuid": "ae32980f-4e29-5a4c-bc0e-c5c5b9029bc8", 00:11:20.930 "is_configured": true, 00:11:20.930 "data_offset": 2048, 00:11:20.930 "data_size": 63488 00:11:20.930 }, 00:11:20.930 { 00:11:20.930 "name": "BaseBdev2", 00:11:20.930 "uuid": "4032f1fe-734f-5acb-a7dc-603148f30835", 00:11:20.930 "is_configured": true, 00:11:20.930 "data_offset": 2048, 00:11:20.930 "data_size": 63488 00:11:20.930 } 
00:11:20.930 ] 00:11:20.930 }' 00:11:20.930 12:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:21.190 12:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:21.190 12:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:21.190 12:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:21.190 12:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:21.190 12:54:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.190 12:54:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.190 [2024-11-26 12:54:38.650905] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:21.190 [2024-11-26 12:54:38.714680] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:21.190 [2024-11-26 12:54:38.714766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.190 [2024-11-26 12:54:38.714787] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:21.190 [2024-11-26 12:54:38.714795] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:21.190 12:54:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.190 12:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:21.190 12:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.190 12:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.190 12:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:21.190 12:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.190 12:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:21.190 12:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.190 12:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.190 12:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.190 12:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.190 12:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.190 12:54:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.190 12:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.190 12:54:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.190 12:54:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.190 12:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.190 "name": "raid_bdev1", 00:11:21.190 "uuid": "cd2c116d-b50a-4eee-a229-b8ddcb4011db", 00:11:21.190 "strip_size_kb": 0, 00:11:21.190 "state": "online", 00:11:21.190 "raid_level": "raid1", 00:11:21.190 "superblock": true, 00:11:21.190 "num_base_bdevs": 2, 00:11:21.190 "num_base_bdevs_discovered": 1, 00:11:21.190 "num_base_bdevs_operational": 1, 00:11:21.190 "base_bdevs_list": [ 00:11:21.190 { 00:11:21.190 "name": null, 00:11:21.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.190 "is_configured": false, 00:11:21.190 "data_offset": 0, 00:11:21.190 "data_size": 63488 00:11:21.190 }, 00:11:21.190 { 00:11:21.190 "name": "BaseBdev2", 00:11:21.190 "uuid": 
"4032f1fe-734f-5acb-a7dc-603148f30835", 00:11:21.190 "is_configured": true, 00:11:21.190 "data_offset": 2048, 00:11:21.190 "data_size": 63488 00:11:21.190 } 00:11:21.190 ] 00:11:21.190 }' 00:11:21.190 12:54:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.190 12:54:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.760 12:54:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:21.760 12:54:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:21.760 12:54:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:21.760 12:54:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:21.760 12:54:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:21.760 12:54:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.760 12:54:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.760 12:54:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.760 12:54:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.760 12:54:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.760 12:54:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:21.760 "name": "raid_bdev1", 00:11:21.760 "uuid": "cd2c116d-b50a-4eee-a229-b8ddcb4011db", 00:11:21.760 "strip_size_kb": 0, 00:11:21.760 "state": "online", 00:11:21.760 "raid_level": "raid1", 00:11:21.760 "superblock": true, 00:11:21.760 "num_base_bdevs": 2, 00:11:21.760 "num_base_bdevs_discovered": 1, 00:11:21.760 "num_base_bdevs_operational": 1, 00:11:21.760 "base_bdevs_list": [ 00:11:21.760 { 
00:11:21.760 "name": null, 00:11:21.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.760 "is_configured": false, 00:11:21.760 "data_offset": 0, 00:11:21.760 "data_size": 63488 00:11:21.760 }, 00:11:21.760 { 00:11:21.760 "name": "BaseBdev2", 00:11:21.760 "uuid": "4032f1fe-734f-5acb-a7dc-603148f30835", 00:11:21.760 "is_configured": true, 00:11:21.760 "data_offset": 2048, 00:11:21.760 "data_size": 63488 00:11:21.760 } 00:11:21.760 ] 00:11:21.760 }' 00:11:21.760 12:54:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:21.760 12:54:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:21.760 12:54:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:21.760 12:54:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:21.760 12:54:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:21.760 12:54:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.760 12:54:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.760 [2024-11-26 12:54:39.342106] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:21.760 [2024-11-26 12:54:39.346191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3290 00:11:21.760 12:54:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.760 12:54:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:21.760 [2024-11-26 12:54:39.348081] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:22.699 12:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:22.700 12:54:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:22.700 12:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:22.700 12:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:22.700 12:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:22.700 12:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.700 12:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.700 12:54:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.700 12:54:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.700 12:54:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.959 12:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:22.959 "name": "raid_bdev1", 00:11:22.959 "uuid": "cd2c116d-b50a-4eee-a229-b8ddcb4011db", 00:11:22.959 "strip_size_kb": 0, 00:11:22.959 "state": "online", 00:11:22.959 "raid_level": "raid1", 00:11:22.959 "superblock": true, 00:11:22.959 "num_base_bdevs": 2, 00:11:22.959 "num_base_bdevs_discovered": 2, 00:11:22.959 "num_base_bdevs_operational": 2, 00:11:22.959 "process": { 00:11:22.959 "type": "rebuild", 00:11:22.959 "target": "spare", 00:11:22.959 "progress": { 00:11:22.959 "blocks": 20480, 00:11:22.959 "percent": 32 00:11:22.959 } 00:11:22.959 }, 00:11:22.959 "base_bdevs_list": [ 00:11:22.959 { 00:11:22.959 "name": "spare", 00:11:22.959 "uuid": "ae32980f-4e29-5a4c-bc0e-c5c5b9029bc8", 00:11:22.959 "is_configured": true, 00:11:22.959 "data_offset": 2048, 00:11:22.959 "data_size": 63488 00:11:22.959 }, 00:11:22.959 { 00:11:22.959 "name": "BaseBdev2", 00:11:22.959 "uuid": "4032f1fe-734f-5acb-a7dc-603148f30835", 00:11:22.959 
"is_configured": true, 00:11:22.959 "data_offset": 2048, 00:11:22.959 "data_size": 63488 00:11:22.959 } 00:11:22.959 ] 00:11:22.959 }' 00:11:22.959 12:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:22.959 12:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:22.959 12:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:22.959 12:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:22.959 12:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:22.960 12:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:22.960 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:22.960 12:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:22.960 12:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:22.960 12:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:22.960 12:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=304 00:11:22.960 12:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:22.960 12:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:22.960 12:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:22.960 12:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:22.960 12:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:22.960 12:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:11:22.960 12:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.960 12:54:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.960 12:54:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.960 12:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.960 12:54:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.960 12:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:22.960 "name": "raid_bdev1", 00:11:22.960 "uuid": "cd2c116d-b50a-4eee-a229-b8ddcb4011db", 00:11:22.960 "strip_size_kb": 0, 00:11:22.960 "state": "online", 00:11:22.960 "raid_level": "raid1", 00:11:22.960 "superblock": true, 00:11:22.960 "num_base_bdevs": 2, 00:11:22.960 "num_base_bdevs_discovered": 2, 00:11:22.960 "num_base_bdevs_operational": 2, 00:11:22.960 "process": { 00:11:22.960 "type": "rebuild", 00:11:22.960 "target": "spare", 00:11:22.960 "progress": { 00:11:22.960 "blocks": 22528, 00:11:22.960 "percent": 35 00:11:22.960 } 00:11:22.960 }, 00:11:22.960 "base_bdevs_list": [ 00:11:22.960 { 00:11:22.960 "name": "spare", 00:11:22.960 "uuid": "ae32980f-4e29-5a4c-bc0e-c5c5b9029bc8", 00:11:22.960 "is_configured": true, 00:11:22.960 "data_offset": 2048, 00:11:22.960 "data_size": 63488 00:11:22.960 }, 00:11:22.960 { 00:11:22.960 "name": "BaseBdev2", 00:11:22.960 "uuid": "4032f1fe-734f-5acb-a7dc-603148f30835", 00:11:22.960 "is_configured": true, 00:11:22.960 "data_offset": 2048, 00:11:22.960 "data_size": 63488 00:11:22.960 } 00:11:22.960 ] 00:11:22.960 }' 00:11:22.960 12:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:22.960 12:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:22.960 12:54:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:22.960 12:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:22.960 12:54:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:24.366 12:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:24.366 12:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:24.366 12:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:24.366 12:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:24.366 12:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:24.366 12:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:24.366 12:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.366 12:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.366 12:54:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.366 12:54:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.366 12:54:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.366 12:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:24.366 "name": "raid_bdev1", 00:11:24.366 "uuid": "cd2c116d-b50a-4eee-a229-b8ddcb4011db", 00:11:24.366 "strip_size_kb": 0, 00:11:24.366 "state": "online", 00:11:24.366 "raid_level": "raid1", 00:11:24.366 "superblock": true, 00:11:24.366 "num_base_bdevs": 2, 00:11:24.366 "num_base_bdevs_discovered": 2, 00:11:24.366 "num_base_bdevs_operational": 2, 00:11:24.366 "process": { 
00:11:24.366 "type": "rebuild", 00:11:24.366 "target": "spare", 00:11:24.366 "progress": { 00:11:24.366 "blocks": 45056, 00:11:24.366 "percent": 70 00:11:24.366 } 00:11:24.366 }, 00:11:24.366 "base_bdevs_list": [ 00:11:24.366 { 00:11:24.366 "name": "spare", 00:11:24.366 "uuid": "ae32980f-4e29-5a4c-bc0e-c5c5b9029bc8", 00:11:24.366 "is_configured": true, 00:11:24.366 "data_offset": 2048, 00:11:24.366 "data_size": 63488 00:11:24.366 }, 00:11:24.366 { 00:11:24.366 "name": "BaseBdev2", 00:11:24.366 "uuid": "4032f1fe-734f-5acb-a7dc-603148f30835", 00:11:24.366 "is_configured": true, 00:11:24.366 "data_offset": 2048, 00:11:24.366 "data_size": 63488 00:11:24.366 } 00:11:24.366 ] 00:11:24.366 }' 00:11:24.366 12:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:24.366 12:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:24.366 12:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:24.366 12:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:24.366 12:54:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:24.936 [2024-11-26 12:54:42.458051] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:24.936 [2024-11-26 12:54:42.458130] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:24.936 [2024-11-26 12:54:42.458248] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.194 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:25.194 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:25.194 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:25.194 
12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:25.194 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:25.194 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:25.194 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.194 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.194 12:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.194 12:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.194 12:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.194 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:25.194 "name": "raid_bdev1", 00:11:25.194 "uuid": "cd2c116d-b50a-4eee-a229-b8ddcb4011db", 00:11:25.194 "strip_size_kb": 0, 00:11:25.194 "state": "online", 00:11:25.194 "raid_level": "raid1", 00:11:25.194 "superblock": true, 00:11:25.194 "num_base_bdevs": 2, 00:11:25.194 "num_base_bdevs_discovered": 2, 00:11:25.194 "num_base_bdevs_operational": 2, 00:11:25.194 "base_bdevs_list": [ 00:11:25.194 { 00:11:25.194 "name": "spare", 00:11:25.194 "uuid": "ae32980f-4e29-5a4c-bc0e-c5c5b9029bc8", 00:11:25.194 "is_configured": true, 00:11:25.194 "data_offset": 2048, 00:11:25.194 "data_size": 63488 00:11:25.194 }, 00:11:25.194 { 00:11:25.194 "name": "BaseBdev2", 00:11:25.194 "uuid": "4032f1fe-734f-5acb-a7dc-603148f30835", 00:11:25.194 "is_configured": true, 00:11:25.194 "data_offset": 2048, 00:11:25.194 "data_size": 63488 00:11:25.194 } 00:11:25.194 ] 00:11:25.194 }' 00:11:25.194 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:25.194 12:54:42 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:25.194 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:25.454 "name": "raid_bdev1", 00:11:25.454 "uuid": "cd2c116d-b50a-4eee-a229-b8ddcb4011db", 00:11:25.454 "strip_size_kb": 0, 00:11:25.454 "state": "online", 00:11:25.454 "raid_level": "raid1", 00:11:25.454 "superblock": true, 00:11:25.454 "num_base_bdevs": 2, 00:11:25.454 "num_base_bdevs_discovered": 2, 00:11:25.454 "num_base_bdevs_operational": 2, 00:11:25.454 "base_bdevs_list": [ 00:11:25.454 { 00:11:25.454 
"name": "spare", 00:11:25.454 "uuid": "ae32980f-4e29-5a4c-bc0e-c5c5b9029bc8", 00:11:25.454 "is_configured": true, 00:11:25.454 "data_offset": 2048, 00:11:25.454 "data_size": 63488 00:11:25.454 }, 00:11:25.454 { 00:11:25.454 "name": "BaseBdev2", 00:11:25.454 "uuid": "4032f1fe-734f-5acb-a7dc-603148f30835", 00:11:25.454 "is_configured": true, 00:11:25.454 "data_offset": 2048, 00:11:25.454 "data_size": 63488 00:11:25.454 } 00:11:25.454 ] 00:11:25.454 }' 00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.454 12:54:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.454 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.454 12:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.454 "name": "raid_bdev1", 00:11:25.454 "uuid": "cd2c116d-b50a-4eee-a229-b8ddcb4011db", 00:11:25.454 "strip_size_kb": 0, 00:11:25.454 "state": "online", 00:11:25.454 "raid_level": "raid1", 00:11:25.454 "superblock": true, 00:11:25.454 "num_base_bdevs": 2, 00:11:25.454 "num_base_bdevs_discovered": 2, 00:11:25.454 "num_base_bdevs_operational": 2, 00:11:25.454 "base_bdevs_list": [ 00:11:25.454 { 00:11:25.454 "name": "spare", 00:11:25.454 "uuid": "ae32980f-4e29-5a4c-bc0e-c5c5b9029bc8", 00:11:25.454 "is_configured": true, 00:11:25.454 "data_offset": 2048, 00:11:25.454 "data_size": 63488 00:11:25.454 }, 00:11:25.454 { 00:11:25.454 "name": "BaseBdev2", 00:11:25.454 "uuid": "4032f1fe-734f-5acb-a7dc-603148f30835", 00:11:25.454 "is_configured": true, 00:11:25.454 "data_offset": 2048, 00:11:25.454 "data_size": 63488 00:11:25.454 } 00:11:25.454 ] 00:11:25.454 }' 00:11:25.454 12:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.454 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.713 12:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:25.713 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.713 12:54:43 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:25.713 [2024-11-26 12:54:43.384853] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:25.713 [2024-11-26 12:54:43.384884] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:25.713 [2024-11-26 12:54:43.384998] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:25.713 [2024-11-26 12:54:43.385067] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:25.713 [2024-11-26 12:54:43.385086] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:26.001 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.001 12:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.001 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.001 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.001 12:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:11:26.001 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.001 12:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:26.001 12:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:26.001 12:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:26.001 12:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:26.001 12:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:26.001 12:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:11:26.001 12:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:26.001 12:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:26.001 12:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:26.001 12:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:26.001 12:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:26.001 12:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:26.001 12:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:26.001 /dev/nbd0 00:11:26.001 12:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:26.001 12:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:26.001 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:26.001 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:26.001 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:26.001 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:26.001 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:26.001 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:26.001 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:26.001 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:26.002 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.002 1+0 records in 00:11:26.002 1+0 records out 00:11:26.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034511 s, 11.9 MB/s 00:11:26.262 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.262 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:26.262 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.262 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:26.262 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:26.262 12:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:26.262 12:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:26.262 12:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:26.262 /dev/nbd1 00:11:26.262 12:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:26.262 12:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:26.262 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:26.262 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:26.262 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:26.262 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:26.262 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:26.262 12:54:43 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:26.262 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:26.262 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:26.262 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.521 1+0 records in 00:11:26.521 1+0 records out 00:11:26.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415758 s, 9.9 MB/s 00:11:26.521 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.521 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:26.521 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.521 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:26.521 12:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:26.521 12:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:26.521 12:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:26.521 12:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:26.521 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:26.521 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:26.521 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:26.521 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:26.521 
12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:26.521 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:26.521 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:26.780 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:26.780 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:26.780 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:26.780 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:26.780 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:26.780 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:26.780 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:26.780 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:26.780 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:26.780 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:26.780 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:26.780 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:26.780 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:26.780 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:26.780 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:26.780 12:54:44 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:26.780 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:26.780 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:26.780 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:26.780 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:26.780 12:54:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.780 12:54:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.780 12:54:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.780 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:26.780 12:54:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.780 12:54:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.780 [2024-11-26 12:54:44.446929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:26.780 [2024-11-26 12:54:44.446998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.780 [2024-11-26 12:54:44.447019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:26.780 [2024-11-26 12:54:44.447030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.780 [2024-11-26 12:54:44.449312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.780 [2024-11-26 12:54:44.449350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:26.780 [2024-11-26 12:54:44.449444] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:26.780 [2024-11-26 
12:54:44.449509] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:26.780 [2024-11-26 12:54:44.449630] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:26.780 spare 00:11:26.780 12:54:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.781 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:26.781 12:54:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.781 12:54:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.040 [2024-11-26 12:54:44.549549] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:11:27.040 [2024-11-26 12:54:44.549581] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:27.040 [2024-11-26 12:54:44.549842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1940 00:11:27.040 [2024-11-26 12:54:44.549990] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:11:27.040 [2024-11-26 12:54:44.550018] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:11:27.040 [2024-11-26 12:54:44.550153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.040 12:54:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.040 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:27.040 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.040 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.040 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:27.040 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.040 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:27.040 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.040 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.040 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.040 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.040 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.040 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.040 12:54:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.040 12:54:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.040 12:54:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.040 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.040 "name": "raid_bdev1", 00:11:27.040 "uuid": "cd2c116d-b50a-4eee-a229-b8ddcb4011db", 00:11:27.040 "strip_size_kb": 0, 00:11:27.040 "state": "online", 00:11:27.040 "raid_level": "raid1", 00:11:27.040 "superblock": true, 00:11:27.040 "num_base_bdevs": 2, 00:11:27.040 "num_base_bdevs_discovered": 2, 00:11:27.040 "num_base_bdevs_operational": 2, 00:11:27.040 "base_bdevs_list": [ 00:11:27.040 { 00:11:27.040 "name": "spare", 00:11:27.040 "uuid": "ae32980f-4e29-5a4c-bc0e-c5c5b9029bc8", 00:11:27.040 "is_configured": true, 00:11:27.040 "data_offset": 2048, 00:11:27.040 "data_size": 63488 00:11:27.040 }, 00:11:27.040 { 00:11:27.040 "name": "BaseBdev2", 00:11:27.040 "uuid": 
"4032f1fe-734f-5acb-a7dc-603148f30835", 00:11:27.040 "is_configured": true, 00:11:27.040 "data_offset": 2048, 00:11:27.040 "data_size": 63488 00:11:27.040 } 00:11:27.040 ] 00:11:27.040 }' 00:11:27.040 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.040 12:54:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.299 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:27.299 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:27.299 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:27.299 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:27.299 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:27.299 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.299 12:54:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.299 12:54:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.299 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.299 12:54:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.559 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:27.559 "name": "raid_bdev1", 00:11:27.559 "uuid": "cd2c116d-b50a-4eee-a229-b8ddcb4011db", 00:11:27.559 "strip_size_kb": 0, 00:11:27.559 "state": "online", 00:11:27.559 "raid_level": "raid1", 00:11:27.559 "superblock": true, 00:11:27.559 "num_base_bdevs": 2, 00:11:27.559 "num_base_bdevs_discovered": 2, 00:11:27.559 "num_base_bdevs_operational": 2, 00:11:27.559 "base_bdevs_list": [ 00:11:27.559 { 
00:11:27.559 "name": "spare", 00:11:27.559 "uuid": "ae32980f-4e29-5a4c-bc0e-c5c5b9029bc8", 00:11:27.559 "is_configured": true, 00:11:27.559 "data_offset": 2048, 00:11:27.559 "data_size": 63488 00:11:27.559 }, 00:11:27.559 { 00:11:27.559 "name": "BaseBdev2", 00:11:27.559 "uuid": "4032f1fe-734f-5acb-a7dc-603148f30835", 00:11:27.559 "is_configured": true, 00:11:27.559 "data_offset": 2048, 00:11:27.559 "data_size": 63488 00:11:27.559 } 00:11:27.559 ] 00:11:27.559 }' 00:11:27.559 12:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.559 [2024-11-26 12:54:45.117815] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.559 "name": "raid_bdev1", 00:11:27.559 "uuid": "cd2c116d-b50a-4eee-a229-b8ddcb4011db", 00:11:27.559 "strip_size_kb": 0, 00:11:27.559 
"state": "online", 00:11:27.559 "raid_level": "raid1", 00:11:27.559 "superblock": true, 00:11:27.559 "num_base_bdevs": 2, 00:11:27.559 "num_base_bdevs_discovered": 1, 00:11:27.559 "num_base_bdevs_operational": 1, 00:11:27.559 "base_bdevs_list": [ 00:11:27.559 { 00:11:27.559 "name": null, 00:11:27.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.559 "is_configured": false, 00:11:27.559 "data_offset": 0, 00:11:27.559 "data_size": 63488 00:11:27.559 }, 00:11:27.559 { 00:11:27.559 "name": "BaseBdev2", 00:11:27.559 "uuid": "4032f1fe-734f-5acb-a7dc-603148f30835", 00:11:27.559 "is_configured": true, 00:11:27.559 "data_offset": 2048, 00:11:27.559 "data_size": 63488 00:11:27.559 } 00:11:27.559 ] 00:11:27.559 }' 00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.559 12:54:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.129 12:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:28.129 12:54:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.129 12:54:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.129 [2024-11-26 12:54:45.537208] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:28.129 [2024-11-26 12:54:45.537382] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:28.129 [2024-11-26 12:54:45.537404] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:11:28.129 [2024-11-26 12:54:45.537443] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:28.129 [2024-11-26 12:54:45.541465] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1a10 00:11:28.129 12:54:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.129 12:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:11:28.129 [2024-11-26 12:54:45.543316] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:29.068 12:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:29.068 12:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:29.068 12:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:29.068 12:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:29.068 12:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:29.068 12:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.068 12:54:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.068 12:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.068 12:54:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.068 12:54:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.068 12:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:29.068 "name": "raid_bdev1", 00:11:29.068 "uuid": "cd2c116d-b50a-4eee-a229-b8ddcb4011db", 00:11:29.068 "strip_size_kb": 0, 00:11:29.068 "state": "online", 00:11:29.068 "raid_level": "raid1", 
00:11:29.068 "superblock": true, 00:11:29.068 "num_base_bdevs": 2, 00:11:29.068 "num_base_bdevs_discovered": 2, 00:11:29.068 "num_base_bdevs_operational": 2, 00:11:29.068 "process": { 00:11:29.068 "type": "rebuild", 00:11:29.068 "target": "spare", 00:11:29.068 "progress": { 00:11:29.068 "blocks": 20480, 00:11:29.068 "percent": 32 00:11:29.068 } 00:11:29.068 }, 00:11:29.068 "base_bdevs_list": [ 00:11:29.068 { 00:11:29.068 "name": "spare", 00:11:29.068 "uuid": "ae32980f-4e29-5a4c-bc0e-c5c5b9029bc8", 00:11:29.068 "is_configured": true, 00:11:29.068 "data_offset": 2048, 00:11:29.068 "data_size": 63488 00:11:29.068 }, 00:11:29.068 { 00:11:29.068 "name": "BaseBdev2", 00:11:29.068 "uuid": "4032f1fe-734f-5acb-a7dc-603148f30835", 00:11:29.068 "is_configured": true, 00:11:29.068 "data_offset": 2048, 00:11:29.068 "data_size": 63488 00:11:29.068 } 00:11:29.068 ] 00:11:29.068 }' 00:11:29.068 12:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:29.068 12:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:29.068 12:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:29.068 12:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:29.068 12:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:11:29.068 12:54:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.068 12:54:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.068 [2024-11-26 12:54:46.680195] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:29.328 [2024-11-26 12:54:46.747124] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:29.328 [2024-11-26 12:54:46.747185] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:11:29.328 [2024-11-26 12:54:46.747202] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:29.328 [2024-11-26 12:54:46.747210] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:29.328 12:54:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.328 12:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:29.328 12:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:29.328 12:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.328 12:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.328 12:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.328 12:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:29.328 12:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.328 12:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.328 12:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.328 12:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.328 12:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.328 12:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.328 12:54:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.328 12:54:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.328 12:54:46 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.328 12:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.328 "name": "raid_bdev1", 00:11:29.328 "uuid": "cd2c116d-b50a-4eee-a229-b8ddcb4011db", 00:11:29.328 "strip_size_kb": 0, 00:11:29.328 "state": "online", 00:11:29.328 "raid_level": "raid1", 00:11:29.328 "superblock": true, 00:11:29.328 "num_base_bdevs": 2, 00:11:29.328 "num_base_bdevs_discovered": 1, 00:11:29.328 "num_base_bdevs_operational": 1, 00:11:29.328 "base_bdevs_list": [ 00:11:29.328 { 00:11:29.328 "name": null, 00:11:29.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.328 "is_configured": false, 00:11:29.328 "data_offset": 0, 00:11:29.328 "data_size": 63488 00:11:29.328 }, 00:11:29.328 { 00:11:29.328 "name": "BaseBdev2", 00:11:29.328 "uuid": "4032f1fe-734f-5acb-a7dc-603148f30835", 00:11:29.328 "is_configured": true, 00:11:29.328 "data_offset": 2048, 00:11:29.328 "data_size": 63488 00:11:29.328 } 00:11:29.328 ] 00:11:29.328 }' 00:11:29.328 12:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.328 12:54:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.588 12:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:29.588 12:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.588 12:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.588 [2024-11-26 12:54:47.222546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:29.589 [2024-11-26 12:54:47.222600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.589 [2024-11-26 12:54:47.222622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:29.589 [2024-11-26 12:54:47.222632] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.589 [2024-11-26 12:54:47.223049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.589 [2024-11-26 12:54:47.223066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:29.589 [2024-11-26 12:54:47.223143] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:29.589 [2024-11-26 12:54:47.223154] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:29.589 [2024-11-26 12:54:47.223192] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:29.589 [2024-11-26 12:54:47.223221] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:29.589 [2024-11-26 12:54:47.227066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:11:29.589 spare 00:11:29.589 12:54:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.589 12:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:11:29.589 [2024-11-26 12:54:47.228973] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:30.969 "name": "raid_bdev1", 00:11:30.969 "uuid": "cd2c116d-b50a-4eee-a229-b8ddcb4011db", 00:11:30.969 "strip_size_kb": 0, 00:11:30.969 "state": "online", 00:11:30.969 "raid_level": "raid1", 00:11:30.969 "superblock": true, 00:11:30.969 "num_base_bdevs": 2, 00:11:30.969 "num_base_bdevs_discovered": 2, 00:11:30.969 "num_base_bdevs_operational": 2, 00:11:30.969 "process": { 00:11:30.969 "type": "rebuild", 00:11:30.969 "target": "spare", 00:11:30.969 "progress": { 00:11:30.969 "blocks": 20480, 00:11:30.969 "percent": 32 00:11:30.969 } 00:11:30.969 }, 00:11:30.969 "base_bdevs_list": [ 00:11:30.969 { 00:11:30.969 "name": "spare", 00:11:30.969 "uuid": "ae32980f-4e29-5a4c-bc0e-c5c5b9029bc8", 00:11:30.969 "is_configured": true, 00:11:30.969 "data_offset": 2048, 00:11:30.969 "data_size": 63488 00:11:30.969 }, 00:11:30.969 { 00:11:30.969 "name": "BaseBdev2", 00:11:30.969 "uuid": "4032f1fe-734f-5acb-a7dc-603148f30835", 00:11:30.969 "is_configured": true, 00:11:30.969 "data_offset": 2048, 00:11:30.969 "data_size": 63488 00:11:30.969 } 00:11:30.969 ] 00:11:30.969 }' 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:30.969 
12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.969 [2024-11-26 12:54:48.381880] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:30.969 [2024-11-26 12:54:48.432867] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:30.969 [2024-11-26 12:54:48.432925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.969 [2024-11-26 12:54:48.432939] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:30.969 [2024-11-26 12:54:48.432948] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.969 "name": "raid_bdev1", 00:11:30.969 "uuid": "cd2c116d-b50a-4eee-a229-b8ddcb4011db", 00:11:30.969 "strip_size_kb": 0, 00:11:30.969 "state": "online", 00:11:30.969 "raid_level": "raid1", 00:11:30.969 "superblock": true, 00:11:30.969 "num_base_bdevs": 2, 00:11:30.969 "num_base_bdevs_discovered": 1, 00:11:30.969 "num_base_bdevs_operational": 1, 00:11:30.969 "base_bdevs_list": [ 00:11:30.969 { 00:11:30.969 "name": null, 00:11:30.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.969 "is_configured": false, 00:11:30.969 "data_offset": 0, 00:11:30.969 "data_size": 63488 00:11:30.969 }, 00:11:30.969 { 00:11:30.969 "name": "BaseBdev2", 00:11:30.969 "uuid": "4032f1fe-734f-5acb-a7dc-603148f30835", 00:11:30.969 "is_configured": true, 00:11:30.969 "data_offset": 2048, 00:11:30.969 "data_size": 63488 00:11:30.969 } 00:11:30.969 ] 00:11:30.969 }' 00:11:30.969 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.970 12:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.230 12:54:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:31.230 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:31.230 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:31.230 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:31.230 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:31.230 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.230 12:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.230 12:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.230 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.230 12:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.230 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:31.230 "name": "raid_bdev1", 00:11:31.230 "uuid": "cd2c116d-b50a-4eee-a229-b8ddcb4011db", 00:11:31.230 "strip_size_kb": 0, 00:11:31.230 "state": "online", 00:11:31.230 "raid_level": "raid1", 00:11:31.230 "superblock": true, 00:11:31.230 "num_base_bdevs": 2, 00:11:31.230 "num_base_bdevs_discovered": 1, 00:11:31.230 "num_base_bdevs_operational": 1, 00:11:31.230 "base_bdevs_list": [ 00:11:31.230 { 00:11:31.230 "name": null, 00:11:31.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.230 "is_configured": false, 00:11:31.230 "data_offset": 0, 00:11:31.230 "data_size": 63488 00:11:31.230 }, 00:11:31.230 { 00:11:31.230 "name": "BaseBdev2", 00:11:31.230 "uuid": "4032f1fe-734f-5acb-a7dc-603148f30835", 00:11:31.230 "is_configured": true, 00:11:31.230 "data_offset": 2048, 00:11:31.230 "data_size": 
63488 00:11:31.230 } 00:11:31.230 ] 00:11:31.230 }' 00:11:31.230 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:31.490 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:31.490 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:31.490 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:31.490 12:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:11:31.490 12:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.490 12:54:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.490 12:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.490 12:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:31.490 12:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.490 12:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.490 [2024-11-26 12:54:49.016158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:31.490 [2024-11-26 12:54:49.016224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.490 [2024-11-26 12:54:49.016243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:31.490 [2024-11-26 12:54:49.016254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.490 [2024-11-26 12:54:49.016664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.490 [2024-11-26 12:54:49.016690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:11:31.490 [2024-11-26 12:54:49.016755] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:31.490 [2024-11-26 12:54:49.016773] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:31.490 [2024-11-26 12:54:49.016794] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:31.490 [2024-11-26 12:54:49.016806] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:11:31.490 BaseBdev1 00:11:31.490 12:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.490 12:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:32.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:32.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:32.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:32.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.428 "name": "raid_bdev1", 00:11:32.428 "uuid": "cd2c116d-b50a-4eee-a229-b8ddcb4011db", 00:11:32.428 "strip_size_kb": 0, 00:11:32.428 "state": "online", 00:11:32.428 "raid_level": "raid1", 00:11:32.428 "superblock": true, 00:11:32.428 "num_base_bdevs": 2, 00:11:32.428 "num_base_bdevs_discovered": 1, 00:11:32.428 "num_base_bdevs_operational": 1, 00:11:32.428 "base_bdevs_list": [ 00:11:32.428 { 00:11:32.429 "name": null, 00:11:32.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.429 "is_configured": false, 00:11:32.429 "data_offset": 0, 00:11:32.429 "data_size": 63488 00:11:32.429 }, 00:11:32.429 { 00:11:32.429 "name": "BaseBdev2", 00:11:32.429 "uuid": "4032f1fe-734f-5acb-a7dc-603148f30835", 00:11:32.429 "is_configured": true, 00:11:32.429 "data_offset": 2048, 00:11:32.429 "data_size": 63488 00:11:32.429 } 00:11:32.429 ] 00:11:32.429 }' 00:11:32.429 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.429 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.998 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:32.998 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:32.998 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:11:32.998 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:32.998 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:32.998 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.998 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.998 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.998 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.998 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.998 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:32.998 "name": "raid_bdev1", 00:11:32.998 "uuid": "cd2c116d-b50a-4eee-a229-b8ddcb4011db", 00:11:32.998 "strip_size_kb": 0, 00:11:32.998 "state": "online", 00:11:32.998 "raid_level": "raid1", 00:11:32.998 "superblock": true, 00:11:32.998 "num_base_bdevs": 2, 00:11:32.998 "num_base_bdevs_discovered": 1, 00:11:32.998 "num_base_bdevs_operational": 1, 00:11:32.998 "base_bdevs_list": [ 00:11:32.998 { 00:11:32.998 "name": null, 00:11:32.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.998 "is_configured": false, 00:11:32.998 "data_offset": 0, 00:11:32.998 "data_size": 63488 00:11:32.998 }, 00:11:32.998 { 00:11:32.998 "name": "BaseBdev2", 00:11:32.998 "uuid": "4032f1fe-734f-5acb-a7dc-603148f30835", 00:11:32.998 "is_configured": true, 00:11:32.998 "data_offset": 2048, 00:11:32.998 "data_size": 63488 00:11:32.998 } 00:11:32.998 ] 00:11:32.998 }' 00:11:32.998 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:32.998 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:32.998 12:54:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:32.998 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:32.998 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:32.998 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:11:32.998 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:32.998 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:32.998 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:32.998 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:32.998 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:32.998 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:32.998 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.998 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.998 [2024-11-26 12:54:50.665336] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:32.998 [2024-11-26 12:54:50.665500] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:32.998 [2024-11-26 12:54:50.665512] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:32.998 request: 00:11:32.998 { 00:11:32.998 "base_bdev": "BaseBdev1", 00:11:32.998 "raid_bdev": "raid_bdev1", 00:11:32.998 "method": 
"bdev_raid_add_base_bdev", 00:11:32.998 "req_id": 1 00:11:32.998 } 00:11:32.998 Got JSON-RPC error response 00:11:32.998 response: 00:11:32.998 { 00:11:32.998 "code": -22, 00:11:32.998 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:11:32.998 } 00:11:32.998 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:32.998 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:11:32.998 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:32.998 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:32.998 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:32.998 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:34.379 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:34.379 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.379 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.379 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.379 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.379 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:34.379 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.379 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.379 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.379 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.379 12:54:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.379 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.379 12:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.379 12:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.379 12:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.379 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.379 "name": "raid_bdev1", 00:11:34.379 "uuid": "cd2c116d-b50a-4eee-a229-b8ddcb4011db", 00:11:34.379 "strip_size_kb": 0, 00:11:34.379 "state": "online", 00:11:34.379 "raid_level": "raid1", 00:11:34.379 "superblock": true, 00:11:34.379 "num_base_bdevs": 2, 00:11:34.379 "num_base_bdevs_discovered": 1, 00:11:34.379 "num_base_bdevs_operational": 1, 00:11:34.379 "base_bdevs_list": [ 00:11:34.379 { 00:11:34.379 "name": null, 00:11:34.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.379 "is_configured": false, 00:11:34.379 "data_offset": 0, 00:11:34.379 "data_size": 63488 00:11:34.379 }, 00:11:34.379 { 00:11:34.379 "name": "BaseBdev2", 00:11:34.379 "uuid": "4032f1fe-734f-5acb-a7dc-603148f30835", 00:11:34.379 "is_configured": true, 00:11:34.379 "data_offset": 2048, 00:11:34.379 "data_size": 63488 00:11:34.379 } 00:11:34.379 ] 00:11:34.379 }' 00:11:34.379 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.379 12:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.639 12:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:34.639 12:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:34.639 12:54:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:34.639 12:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:34.639 12:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:34.639 12:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.639 12:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.639 12:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.639 12:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.639 12:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.639 12:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:34.639 "name": "raid_bdev1", 00:11:34.639 "uuid": "cd2c116d-b50a-4eee-a229-b8ddcb4011db", 00:11:34.639 "strip_size_kb": 0, 00:11:34.639 "state": "online", 00:11:34.639 "raid_level": "raid1", 00:11:34.639 "superblock": true, 00:11:34.639 "num_base_bdevs": 2, 00:11:34.639 "num_base_bdevs_discovered": 1, 00:11:34.639 "num_base_bdevs_operational": 1, 00:11:34.639 "base_bdevs_list": [ 00:11:34.639 { 00:11:34.639 "name": null, 00:11:34.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.639 "is_configured": false, 00:11:34.639 "data_offset": 0, 00:11:34.639 "data_size": 63488 00:11:34.639 }, 00:11:34.639 { 00:11:34.639 "name": "BaseBdev2", 00:11:34.639 "uuid": "4032f1fe-734f-5acb-a7dc-603148f30835", 00:11:34.639 "is_configured": true, 00:11:34.639 "data_offset": 2048, 00:11:34.639 "data_size": 63488 00:11:34.639 } 00:11:34.639 ] 00:11:34.639 }' 00:11:34.639 12:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:34.640 12:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:11:34.640 12:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:34.640 12:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:34.640 12:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 86615 00:11:34.640 12:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 86615 ']' 00:11:34.640 12:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 86615 00:11:34.640 12:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:34.640 12:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:34.640 12:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86615 00:11:34.640 12:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:34.640 12:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:34.640 killing process with pid 86615 00:11:34.640 12:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86615' 00:11:34.640 12:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 86615 00:11:34.640 Received shutdown signal, test time was about 60.000000 seconds 00:11:34.640 00:11:34.640 Latency(us) 00:11:34.640 [2024-11-26T12:54:52.324Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:34.640 [2024-11-26T12:54:52.324Z] =================================================================================================================== 00:11:34.640 [2024-11-26T12:54:52.324Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:34.640 [2024-11-26 12:54:52.253753] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:34.640 [2024-11-26 
12:54:52.253882] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:34.640 12:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 86615 00:11:34.640 [2024-11-26 12:54:52.253942] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:34.640 [2024-11-26 12:54:52.253952] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:11:34.640 [2024-11-26 12:54:52.285107] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:34.902 12:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:11:34.902 00:11:34.902 real 0m20.992s 00:11:34.902 user 0m26.030s 00:11:34.902 sys 0m3.418s 00:11:34.902 12:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:34.902 12:54:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.902 ************************************ 00:11:34.902 END TEST raid_rebuild_test_sb 00:11:34.902 ************************************ 00:11:35.163 12:54:52 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:11:35.163 12:54:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:35.163 12:54:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:35.163 12:54:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:35.163 ************************************ 00:11:35.163 START TEST raid_rebuild_test_io 00:11:35.163 ************************************ 00:11:35.163 12:54:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:11:35.163 12:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:35.163 12:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:11:35.163 12:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:35.163 12:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:35.163 12:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:35.163 12:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:35.163 12:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:35.163 12:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:35.163 12:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:35.163 12:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:35.163 12:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:35.163 12:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:35.163 12:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:35.163 12:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:35.163 12:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:35.163 12:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:35.163 12:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:35.164 12:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:35.164 12:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:35.164 12:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:35.164 12:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:35.164 
12:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:35.164 12:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:35.164 12:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87326 00:11:35.164 12:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:35.164 12:54:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87326 00:11:35.164 12:54:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 87326 ']' 00:11:35.164 12:54:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.164 12:54:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:35.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.164 12:54:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.164 12:54:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:35.164 12:54:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.164 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:35.164 Zero copy mechanism will not be used. 00:11:35.164 [2024-11-26 12:54:52.687069] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:35.164 [2024-11-26 12:54:52.687214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87326 ] 00:11:35.164 [2024-11-26 12:54:52.826462] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.423 [2024-11-26 12:54:52.872171] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.423 [2024-11-26 12:54:52.915663] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.423 [2024-11-26 12:54:52.915702] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.993 BaseBdev1_malloc 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.993 [2024-11-26 12:54:53.538321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:11:35.993 [2024-11-26 12:54:53.538385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.993 [2024-11-26 12:54:53.538430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:35.993 [2024-11-26 12:54:53.538445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.993 [2024-11-26 12:54:53.540491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.993 [2024-11-26 12:54:53.540529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:35.993 BaseBdev1 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.993 BaseBdev2_malloc 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.993 [2024-11-26 12:54:53.583469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:35.993 [2024-11-26 12:54:53.583611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.993 [2024-11-26 12:54:53.583672] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:35.993 [2024-11-26 12:54:53.583703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.993 [2024-11-26 12:54:53.588210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.993 [2024-11-26 12:54:53.588273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:35.993 BaseBdev2 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.993 spare_malloc 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.993 spare_delay 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.993 [2024-11-26 12:54:53.626167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:11:35.993 [2024-11-26 12:54:53.626228] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.993 [2024-11-26 12:54:53.626266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:35.993 [2024-11-26 12:54:53.626275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.993 [2024-11-26 12:54:53.628322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.993 [2024-11-26 12:54:53.628354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:35.993 spare 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.993 [2024-11-26 12:54:53.638207] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:35.993 [2024-11-26 12:54:53.639977] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:35.993 [2024-11-26 12:54:53.640077] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:35.993 [2024-11-26 12:54:53.640096] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:35.993 [2024-11-26 12:54:53.640345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:35.993 [2024-11-26 12:54:53.640470] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:35.993 [2024-11-26 12:54:53.640486] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000006280 00:11:35.993 [2024-11-26 12:54:53.640607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:35.993 12:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.994 12:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.994 12:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.994 12:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.994 12:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.994 12:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.994 12:54:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.994 12:54:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.994 12:54:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.252 12:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.252 
"name": "raid_bdev1", 00:11:36.252 "uuid": "6e774ecf-9968-451b-96ad-1909551c21ae", 00:11:36.252 "strip_size_kb": 0, 00:11:36.252 "state": "online", 00:11:36.252 "raid_level": "raid1", 00:11:36.252 "superblock": false, 00:11:36.252 "num_base_bdevs": 2, 00:11:36.253 "num_base_bdevs_discovered": 2, 00:11:36.253 "num_base_bdevs_operational": 2, 00:11:36.253 "base_bdevs_list": [ 00:11:36.253 { 00:11:36.253 "name": "BaseBdev1", 00:11:36.253 "uuid": "5a03dd73-321c-5beb-a0f9-0bf76a6e0a40", 00:11:36.253 "is_configured": true, 00:11:36.253 "data_offset": 0, 00:11:36.253 "data_size": 65536 00:11:36.253 }, 00:11:36.253 { 00:11:36.253 "name": "BaseBdev2", 00:11:36.253 "uuid": "2653a8aa-c1b9-5fd8-b354-baef5aa9530e", 00:11:36.253 "is_configured": true, 00:11:36.253 "data_offset": 0, 00:11:36.253 "data_size": 65536 00:11:36.253 } 00:11:36.253 ] 00:11:36.253 }' 00:11:36.253 12:54:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.253 12:54:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:36.512 12:54:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:36.512 12:54:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.512 12:54:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:36.512 12:54:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:36.512 [2024-11-26 12:54:54.081656] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.512 12:54:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.512 12:54:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:36.512 12:54:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.512 12:54:54 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:36.512 12:54:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.512 12:54:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:36.512 12:54:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.512 12:54:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:36.512 12:54:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:36.512 12:54:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:36.512 12:54:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:36.512 12:54:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.512 12:54:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:36.512 [2024-11-26 12:54:54.165251] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:36.512 12:54:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.512 12:54:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:36.512 12:54:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.512 12:54:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.512 12:54:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.512 12:54:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.512 12:54:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:36.512 12:54:54 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.512 12:54:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.512 12:54:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.512 12:54:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.513 12:54:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.513 12:54:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.513 12:54:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.513 12:54:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:36.773 12:54:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.773 12:54:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.773 "name": "raid_bdev1", 00:11:36.773 "uuid": "6e774ecf-9968-451b-96ad-1909551c21ae", 00:11:36.773 "strip_size_kb": 0, 00:11:36.773 "state": "online", 00:11:36.773 "raid_level": "raid1", 00:11:36.773 "superblock": false, 00:11:36.773 "num_base_bdevs": 2, 00:11:36.773 "num_base_bdevs_discovered": 1, 00:11:36.773 "num_base_bdevs_operational": 1, 00:11:36.773 "base_bdevs_list": [ 00:11:36.773 { 00:11:36.773 "name": null, 00:11:36.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.773 "is_configured": false, 00:11:36.773 "data_offset": 0, 00:11:36.773 "data_size": 65536 00:11:36.773 }, 00:11:36.773 { 00:11:36.773 "name": "BaseBdev2", 00:11:36.773 "uuid": "2653a8aa-c1b9-5fd8-b354-baef5aa9530e", 00:11:36.773 "is_configured": true, 00:11:36.773 "data_offset": 0, 00:11:36.773 "data_size": 65536 00:11:36.773 } 00:11:36.773 ] 00:11:36.773 }' 00:11:36.773 12:54:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:11:36.773 12:54:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:36.773 [2024-11-26 12:54:54.231113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:36.773 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:36.773 Zero copy mechanism will not be used. 00:11:36.773 Running I/O for 60 seconds... 00:11:37.033 12:54:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:37.033 12:54:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.033 12:54:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:37.033 [2024-11-26 12:54:54.610891] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:37.033 12:54:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.033 12:54:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:37.033 [2024-11-26 12:54:54.657144] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:37.033 [2024-11-26 12:54:54.659103] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:37.292 [2024-11-26 12:54:54.772034] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:37.292 [2024-11-26 12:54:54.772446] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:37.551 [2024-11-26 12:54:54.980269] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:37.551 [2024-11-26 12:54:54.980578] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:37.811 179.00 IOPS, 537.00 MiB/s 
[2024-11-26T12:54:55.495Z] [2024-11-26 12:54:55.303551] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:37.811 [2024-11-26 12:54:55.303974] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:38.071 [2024-11-26 12:54:55.517652] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:38.071 12:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:38.071 12:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:38.071 12:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:38.071 12:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:38.071 12:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:38.071 12:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.071 12:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.071 12:54:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.072 12:54:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:38.072 12:54:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.072 12:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:38.072 "name": "raid_bdev1", 00:11:38.072 "uuid": "6e774ecf-9968-451b-96ad-1909551c21ae", 00:11:38.072 "strip_size_kb": 0, 00:11:38.072 "state": "online", 00:11:38.072 "raid_level": "raid1", 00:11:38.072 "superblock": false, 00:11:38.072 "num_base_bdevs": 2, 00:11:38.072 
"num_base_bdevs_discovered": 2, 00:11:38.072 "num_base_bdevs_operational": 2, 00:11:38.072 "process": { 00:11:38.072 "type": "rebuild", 00:11:38.072 "target": "spare", 00:11:38.072 "progress": { 00:11:38.072 "blocks": 10240, 00:11:38.072 "percent": 15 00:11:38.072 } 00:11:38.072 }, 00:11:38.072 "base_bdevs_list": [ 00:11:38.072 { 00:11:38.072 "name": "spare", 00:11:38.072 "uuid": "071ec860-2131-578e-a661-0216e57d860e", 00:11:38.072 "is_configured": true, 00:11:38.072 "data_offset": 0, 00:11:38.072 "data_size": 65536 00:11:38.072 }, 00:11:38.072 { 00:11:38.072 "name": "BaseBdev2", 00:11:38.072 "uuid": "2653a8aa-c1b9-5fd8-b354-baef5aa9530e", 00:11:38.072 "is_configured": true, 00:11:38.072 "data_offset": 0, 00:11:38.072 "data_size": 65536 00:11:38.072 } 00:11:38.072 ] 00:11:38.072 }' 00:11:38.072 12:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:38.072 12:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:38.072 12:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:38.331 12:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:38.331 12:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:38.331 12:54:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.331 12:54:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:38.331 [2024-11-26 12:54:55.791032] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:38.331 [2024-11-26 12:54:55.963798] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:38.332 [2024-11-26 12:54:55.972498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.332 [2024-11-26 12:54:55.972536] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:38.332 [2024-11-26 12:54:55.972549] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:38.332 [2024-11-26 12:54:55.978748] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:11:38.332 12:54:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.332 12:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:38.332 12:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.332 12:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.332 12:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.332 12:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.332 12:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:38.332 12:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.332 12:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.332 12:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.332 12:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.332 12:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.332 12:54:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.332 12:54:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.332 12:54:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:11:38.592 12:54:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.592 12:54:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.592 "name": "raid_bdev1", 00:11:38.592 "uuid": "6e774ecf-9968-451b-96ad-1909551c21ae", 00:11:38.592 "strip_size_kb": 0, 00:11:38.592 "state": "online", 00:11:38.592 "raid_level": "raid1", 00:11:38.592 "superblock": false, 00:11:38.592 "num_base_bdevs": 2, 00:11:38.592 "num_base_bdevs_discovered": 1, 00:11:38.592 "num_base_bdevs_operational": 1, 00:11:38.592 "base_bdevs_list": [ 00:11:38.592 { 00:11:38.592 "name": null, 00:11:38.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.592 "is_configured": false, 00:11:38.592 "data_offset": 0, 00:11:38.592 "data_size": 65536 00:11:38.592 }, 00:11:38.592 { 00:11:38.592 "name": "BaseBdev2", 00:11:38.592 "uuid": "2653a8aa-c1b9-5fd8-b354-baef5aa9530e", 00:11:38.592 "is_configured": true, 00:11:38.592 "data_offset": 0, 00:11:38.592 "data_size": 65536 00:11:38.592 } 00:11:38.592 ] 00:11:38.592 }' 00:11:38.592 12:54:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.593 12:54:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:38.853 159.50 IOPS, 478.50 MiB/s [2024-11-26T12:54:56.537Z] 12:54:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:38.853 12:54:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:38.853 12:54:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:38.853 12:54:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:38.853 12:54:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:38.853 12:54:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:11:38.853 12:54:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.853 12:54:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.853 12:54:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:38.853 12:54:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.853 12:54:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:38.853 "name": "raid_bdev1", 00:11:38.853 "uuid": "6e774ecf-9968-451b-96ad-1909551c21ae", 00:11:38.853 "strip_size_kb": 0, 00:11:38.853 "state": "online", 00:11:38.853 "raid_level": "raid1", 00:11:38.853 "superblock": false, 00:11:38.853 "num_base_bdevs": 2, 00:11:38.853 "num_base_bdevs_discovered": 1, 00:11:38.853 "num_base_bdevs_operational": 1, 00:11:38.853 "base_bdevs_list": [ 00:11:38.853 { 00:11:38.853 "name": null, 00:11:38.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.853 "is_configured": false, 00:11:38.853 "data_offset": 0, 00:11:38.853 "data_size": 65536 00:11:38.853 }, 00:11:38.853 { 00:11:38.853 "name": "BaseBdev2", 00:11:38.853 "uuid": "2653a8aa-c1b9-5fd8-b354-baef5aa9530e", 00:11:38.853 "is_configured": true, 00:11:38.853 "data_offset": 0, 00:11:38.853 "data_size": 65536 00:11:38.853 } 00:11:38.853 ] 00:11:38.853 }' 00:11:38.853 12:54:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:38.853 12:54:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:38.853 12:54:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:38.853 12:54:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:38.853 12:54:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:38.853 12:54:56 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.853 12:54:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:38.853 [2024-11-26 12:54:56.517415] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:39.113 12:54:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.113 12:54:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:39.113 [2024-11-26 12:54:56.550655] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:39.113 [2024-11-26 12:54:56.552577] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:39.113 [2024-11-26 12:54:56.667511] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:39.113 [2024-11-26 12:54:56.667922] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:39.388 [2024-11-26 12:54:56.898609] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:39.388 [2024-11-26 12:54:56.898822] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:39.651 176.33 IOPS, 529.00 MiB/s [2024-11-26T12:54:57.335Z] [2024-11-26 12:54:57.233516] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:39.651 [2024-11-26 12:54:57.233912] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:39.911 [2024-11-26 12:54:57.454541] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:39.911 12:54:57 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:39.911 12:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:39.911 12:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:39.911 12:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:39.911 12:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:39.911 12:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.911 12:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.911 12:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.911 12:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:39.911 12:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.171 12:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:40.171 "name": "raid_bdev1", 00:11:40.171 "uuid": "6e774ecf-9968-451b-96ad-1909551c21ae", 00:11:40.171 "strip_size_kb": 0, 00:11:40.171 "state": "online", 00:11:40.171 "raid_level": "raid1", 00:11:40.171 "superblock": false, 00:11:40.171 "num_base_bdevs": 2, 00:11:40.171 "num_base_bdevs_discovered": 2, 00:11:40.171 "num_base_bdevs_operational": 2, 00:11:40.171 "process": { 00:11:40.171 "type": "rebuild", 00:11:40.171 "target": "spare", 00:11:40.171 "progress": { 00:11:40.171 "blocks": 10240, 00:11:40.171 "percent": 15 00:11:40.171 } 00:11:40.171 }, 00:11:40.171 "base_bdevs_list": [ 00:11:40.171 { 00:11:40.171 "name": "spare", 00:11:40.172 "uuid": "071ec860-2131-578e-a661-0216e57d860e", 00:11:40.172 "is_configured": true, 00:11:40.172 "data_offset": 0, 00:11:40.172 "data_size": 65536 00:11:40.172 }, 00:11:40.172 { 
00:11:40.172 "name": "BaseBdev2", 00:11:40.172 "uuid": "2653a8aa-c1b9-5fd8-b354-baef5aa9530e", 00:11:40.172 "is_configured": true, 00:11:40.172 "data_offset": 0, 00:11:40.172 "data_size": 65536 00:11:40.172 } 00:11:40.172 ] 00:11:40.172 }' 00:11:40.172 12:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:40.172 12:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:40.172 12:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:40.172 12:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:40.172 12:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:40.172 12:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:40.172 12:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:40.172 12:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:40.172 12:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=321 00:11:40.172 12:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:40.172 12:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:40.172 12:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:40.172 12:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:40.172 12:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:40.172 12:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:40.172 12:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:40.172 12:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.172 12:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.172 12:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.172 12:54:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.172 12:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:40.172 "name": "raid_bdev1", 00:11:40.172 "uuid": "6e774ecf-9968-451b-96ad-1909551c21ae", 00:11:40.172 "strip_size_kb": 0, 00:11:40.172 "state": "online", 00:11:40.172 "raid_level": "raid1", 00:11:40.172 "superblock": false, 00:11:40.172 "num_base_bdevs": 2, 00:11:40.172 "num_base_bdevs_discovered": 2, 00:11:40.172 "num_base_bdevs_operational": 2, 00:11:40.172 "process": { 00:11:40.172 "type": "rebuild", 00:11:40.172 "target": "spare", 00:11:40.172 "progress": { 00:11:40.172 "blocks": 10240, 00:11:40.172 "percent": 15 00:11:40.172 } 00:11:40.172 }, 00:11:40.172 "base_bdevs_list": [ 00:11:40.172 { 00:11:40.172 "name": "spare", 00:11:40.172 "uuid": "071ec860-2131-578e-a661-0216e57d860e", 00:11:40.172 "is_configured": true, 00:11:40.172 "data_offset": 0, 00:11:40.172 "data_size": 65536 00:11:40.172 }, 00:11:40.172 { 00:11:40.172 "name": "BaseBdev2", 00:11:40.172 "uuid": "2653a8aa-c1b9-5fd8-b354-baef5aa9530e", 00:11:40.172 "is_configured": true, 00:11:40.172 "data_offset": 0, 00:11:40.172 "data_size": 65536 00:11:40.172 } 00:11:40.172 ] 00:11:40.172 }' 00:11:40.172 12:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:40.172 12:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:40.172 12:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:40.172 
[2024-11-26 12:54:57.772977] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:40.172 [2024-11-26 12:54:57.773504] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:40.172 12:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:40.172 12:54:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:40.432 [2024-11-26 12:54:57.979736] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:40.432 [2024-11-26 12:54:57.979894] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:40.691 147.00 IOPS, 441.00 MiB/s [2024-11-26T12:54:58.375Z] [2024-11-26 12:54:58.315669] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:40.950 [2024-11-26 12:54:58.531029] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:41.210 [2024-11-26 12:54:58.767712] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:11:41.210 12:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:41.210 12:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:41.210 12:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:41.210 12:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:41.210 12:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:41.210 12:54:58 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:41.210 12:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.210 12:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.210 12:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.210 12:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:41.210 12:54:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.210 12:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:41.210 "name": "raid_bdev1", 00:11:41.210 "uuid": "6e774ecf-9968-451b-96ad-1909551c21ae", 00:11:41.210 "strip_size_kb": 0, 00:11:41.210 "state": "online", 00:11:41.210 "raid_level": "raid1", 00:11:41.210 "superblock": false, 00:11:41.210 "num_base_bdevs": 2, 00:11:41.210 "num_base_bdevs_discovered": 2, 00:11:41.210 "num_base_bdevs_operational": 2, 00:11:41.210 "process": { 00:11:41.210 "type": "rebuild", 00:11:41.210 "target": "spare", 00:11:41.210 "progress": { 00:11:41.210 "blocks": 26624, 00:11:41.210 "percent": 40 00:11:41.210 } 00:11:41.210 }, 00:11:41.210 "base_bdevs_list": [ 00:11:41.210 { 00:11:41.210 "name": "spare", 00:11:41.210 "uuid": "071ec860-2131-578e-a661-0216e57d860e", 00:11:41.210 "is_configured": true, 00:11:41.210 "data_offset": 0, 00:11:41.210 "data_size": 65536 00:11:41.210 }, 00:11:41.210 { 00:11:41.210 "name": "BaseBdev2", 00:11:41.210 "uuid": "2653a8aa-c1b9-5fd8-b354-baef5aa9530e", 00:11:41.210 "is_configured": true, 00:11:41.210 "data_offset": 0, 00:11:41.210 "data_size": 65536 00:11:41.210 } 00:11:41.210 ] 00:11:41.210 }' 00:11:41.210 12:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:41.210 12:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:41.210 12:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:41.470 12:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:41.470 12:54:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:41.470 [2024-11-26 12:54:59.112714] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:11:42.296 133.40 IOPS, 400.20 MiB/s [2024-11-26T12:54:59.980Z] [2024-11-26 12:54:59.759338] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:11:42.296 12:54:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:42.296 12:54:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:42.296 12:54:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:42.296 12:54:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:42.296 12:54:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:42.296 12:54:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:42.296 12:54:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.296 12:54:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.296 12:54:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.296 12:54:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:42.296 12:54:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.296 [2024-11-26 12:54:59.971186] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:11:42.555 12:54:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:42.555 "name": "raid_bdev1", 00:11:42.555 "uuid": "6e774ecf-9968-451b-96ad-1909551c21ae", 00:11:42.555 "strip_size_kb": 0, 00:11:42.555 "state": "online", 00:11:42.555 "raid_level": "raid1", 00:11:42.555 "superblock": false, 00:11:42.555 "num_base_bdevs": 2, 00:11:42.555 "num_base_bdevs_discovered": 2, 00:11:42.555 "num_base_bdevs_operational": 2, 00:11:42.555 "process": { 00:11:42.555 "type": "rebuild", 00:11:42.555 "target": "spare", 00:11:42.555 "progress": { 00:11:42.555 "blocks": 45056, 00:11:42.555 "percent": 68 00:11:42.555 } 00:11:42.555 }, 00:11:42.555 "base_bdevs_list": [ 00:11:42.555 { 00:11:42.555 "name": "spare", 00:11:42.555 "uuid": "071ec860-2131-578e-a661-0216e57d860e", 00:11:42.555 "is_configured": true, 00:11:42.555 "data_offset": 0, 00:11:42.555 "data_size": 65536 00:11:42.555 }, 00:11:42.555 { 00:11:42.555 "name": "BaseBdev2", 00:11:42.555 "uuid": "2653a8aa-c1b9-5fd8-b354-baef5aa9530e", 00:11:42.555 "is_configured": true, 00:11:42.555 "data_offset": 0, 00:11:42.555 "data_size": 65536 00:11:42.555 } 00:11:42.555 ] 00:11:42.555 }' 00:11:42.555 12:54:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:42.555 12:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:42.555 12:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:42.555 12:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:42.555 12:55:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:42.815 115.17 IOPS, 345.50 MiB/s [2024-11-26T12:55:00.499Z] [2024-11-26 12:55:00.290490] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:11:43.074 [2024-11-26 12:55:00.511736] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:11:43.333 [2024-11-26 12:55:00.925908] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:11:43.592 12:55:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:43.593 12:55:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:43.593 12:55:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:43.593 12:55:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:43.593 12:55:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:43.593 12:55:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:43.593 12:55:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.593 12:55:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.593 12:55:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.593 12:55:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:43.593 12:55:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.593 12:55:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:43.593 "name": "raid_bdev1", 00:11:43.593 "uuid": "6e774ecf-9968-451b-96ad-1909551c21ae", 00:11:43.593 "strip_size_kb": 0, 00:11:43.593 "state": "online", 00:11:43.593 "raid_level": "raid1", 00:11:43.593 "superblock": false, 00:11:43.593 "num_base_bdevs": 2, 00:11:43.593 
"num_base_bdevs_discovered": 2, 00:11:43.593 "num_base_bdevs_operational": 2, 00:11:43.593 "process": { 00:11:43.593 "type": "rebuild", 00:11:43.593 "target": "spare", 00:11:43.593 "progress": { 00:11:43.593 "blocks": 59392, 00:11:43.593 "percent": 90 00:11:43.593 } 00:11:43.593 }, 00:11:43.593 "base_bdevs_list": [ 00:11:43.593 { 00:11:43.593 "name": "spare", 00:11:43.593 "uuid": "071ec860-2131-578e-a661-0216e57d860e", 00:11:43.593 "is_configured": true, 00:11:43.593 "data_offset": 0, 00:11:43.593 "data_size": 65536 00:11:43.593 }, 00:11:43.593 { 00:11:43.593 "name": "BaseBdev2", 00:11:43.593 "uuid": "2653a8aa-c1b9-5fd8-b354-baef5aa9530e", 00:11:43.593 "is_configured": true, 00:11:43.593 "data_offset": 0, 00:11:43.593 "data_size": 65536 00:11:43.593 } 00:11:43.593 ] 00:11:43.593 }' 00:11:43.593 12:55:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:43.593 12:55:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:43.593 12:55:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:43.593 12:55:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:43.593 12:55:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:43.879 105.43 IOPS, 316.29 MiB/s [2024-11-26T12:55:01.563Z] [2024-11-26 12:55:01.345817] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:43.879 [2024-11-26 12:55:01.445607] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:43.879 [2024-11-26 12:55:01.453807] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.822 96.62 IOPS, 289.88 MiB/s [2024-11-26T12:55:02.506Z] 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:44.822 "name": "raid_bdev1", 00:11:44.822 "uuid": "6e774ecf-9968-451b-96ad-1909551c21ae", 00:11:44.822 "strip_size_kb": 0, 00:11:44.822 "state": "online", 00:11:44.822 "raid_level": "raid1", 00:11:44.822 "superblock": false, 00:11:44.822 "num_base_bdevs": 2, 00:11:44.822 "num_base_bdevs_discovered": 2, 00:11:44.822 "num_base_bdevs_operational": 2, 00:11:44.822 "base_bdevs_list": [ 00:11:44.822 { 00:11:44.822 "name": "spare", 00:11:44.822 "uuid": "071ec860-2131-578e-a661-0216e57d860e", 00:11:44.822 "is_configured": true, 00:11:44.822 "data_offset": 0, 00:11:44.822 "data_size": 65536 00:11:44.822 }, 00:11:44.822 { 00:11:44.822 "name": "BaseBdev2", 00:11:44.822 "uuid": "2653a8aa-c1b9-5fd8-b354-baef5aa9530e", 00:11:44.822 "is_configured": true, 00:11:44.822 "data_offset": 0, 00:11:44.822 "data_size": 65536 00:11:44.822 } 
00:11:44.822 ] 00:11:44.822 }' 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:44.822 "name": "raid_bdev1", 00:11:44.822 "uuid": "6e774ecf-9968-451b-96ad-1909551c21ae", 00:11:44.822 "strip_size_kb": 0, 00:11:44.822 "state": "online", 00:11:44.822 "raid_level": "raid1", 00:11:44.822 
"superblock": false, 00:11:44.822 "num_base_bdevs": 2, 00:11:44.822 "num_base_bdevs_discovered": 2, 00:11:44.822 "num_base_bdevs_operational": 2, 00:11:44.822 "base_bdevs_list": [ 00:11:44.822 { 00:11:44.822 "name": "spare", 00:11:44.822 "uuid": "071ec860-2131-578e-a661-0216e57d860e", 00:11:44.822 "is_configured": true, 00:11:44.822 "data_offset": 0, 00:11:44.822 "data_size": 65536 00:11:44.822 }, 00:11:44.822 { 00:11:44.822 "name": "BaseBdev2", 00:11:44.822 "uuid": "2653a8aa-c1b9-5fd8-b354-baef5aa9530e", 00:11:44.822 "is_configured": true, 00:11:44.822 "data_offset": 0, 00:11:44.822 "data_size": 65536 00:11:44.822 } 00:11:44.822 ] 00:11:44.822 }' 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.822 12:55:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:45.082 12:55:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.082 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.082 "name": "raid_bdev1", 00:11:45.082 "uuid": "6e774ecf-9968-451b-96ad-1909551c21ae", 00:11:45.082 "strip_size_kb": 0, 00:11:45.082 "state": "online", 00:11:45.082 "raid_level": "raid1", 00:11:45.082 "superblock": false, 00:11:45.082 "num_base_bdevs": 2, 00:11:45.082 "num_base_bdevs_discovered": 2, 00:11:45.082 "num_base_bdevs_operational": 2, 00:11:45.082 "base_bdevs_list": [ 00:11:45.082 { 00:11:45.082 "name": "spare", 00:11:45.082 "uuid": "071ec860-2131-578e-a661-0216e57d860e", 00:11:45.082 "is_configured": true, 00:11:45.082 "data_offset": 0, 00:11:45.082 "data_size": 65536 00:11:45.082 }, 00:11:45.082 { 00:11:45.082 "name": "BaseBdev2", 00:11:45.082 "uuid": "2653a8aa-c1b9-5fd8-b354-baef5aa9530e", 00:11:45.082 "is_configured": true, 00:11:45.082 "data_offset": 0, 00:11:45.082 "data_size": 65536 00:11:45.082 } 00:11:45.082 ] 00:11:45.082 }' 00:11:45.082 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.082 12:55:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:45.341 12:55:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:11:45.341 12:55:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.341 12:55:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:45.341 [2024-11-26 12:55:02.957175] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:45.341 [2024-11-26 12:55:02.957218] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:45.601 00:11:45.601 Latency(us) 00:11:45.601 [2024-11-26T12:55:03.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:45.601 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:45.601 raid_bdev1 : 8.84 90.26 270.79 0.00 0.00 14060.65 255.78 108520.75 00:11:45.601 [2024-11-26T12:55:03.285Z] =================================================================================================================== 00:11:45.601 [2024-11-26T12:55:03.285Z] Total : 90.26 270.79 0.00 0.00 14060.65 255.78 108520.75 00:11:45.601 [2024-11-26 12:55:03.060084] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.601 [2024-11-26 12:55:03.060124] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:45.601 [2024-11-26 12:55:03.060211] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:45.601 [2024-11-26 12:55:03.060221] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:45.601 { 00:11:45.601 "results": [ 00:11:45.601 { 00:11:45.601 "job": "raid_bdev1", 00:11:45.601 "core_mask": "0x1", 00:11:45.601 "workload": "randrw", 00:11:45.601 "percentage": 50, 00:11:45.601 "status": "finished", 00:11:45.601 "queue_depth": 2, 00:11:45.601 "io_size": 3145728, 00:11:45.601 "runtime": 8.840785, 00:11:45.601 "iops": 90.26347773416049, 00:11:45.601 "mibps": 
270.7904332024815, 00:11:45.601 "io_failed": 0, 00:11:45.601 "io_timeout": 0, 00:11:45.601 "avg_latency_us": 14060.645973011131, 00:11:45.601 "min_latency_us": 255.7764192139738, 00:11:45.601 "max_latency_us": 108520.74759825328 00:11:45.601 } 00:11:45.601 ], 00:11:45.601 "core_count": 1 00:11:45.601 } 00:11:45.601 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.601 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.601 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:45.601 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.601 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:45.601 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.601 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:45.601 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:45.601 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:45.601 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:11:45.601 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:45.601 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:45.602 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:45.602 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:45.602 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:45.602 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:45.602 12:55:03 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:45.602 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:45.602 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:45.863 /dev/nbd0 00:11:45.863 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:45.863 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:45.863 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:45.863 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:11:45.863 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:45.863 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:45.863 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:45.863 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:11:45.863 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:45.863 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:45.863 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:45.863 1+0 records in 00:11:45.863 1+0 records out 00:11:45.863 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402096 s, 10.2 MB/s 00:11:45.863 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:45.863 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:11:45.863 
12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:45.863 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:45.863 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:11:45.863 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:45.863 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:45.863 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:45.863 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:11:45.863 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:11:45.863 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:45.863 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:11:45.863 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:45.863 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:45.863 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:45.863 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:45.863 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:45.863 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:45.863 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:11:46.122 /dev/nbd1 00:11:46.122 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd1 00:11:46.122 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:46.122 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:46.122 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:11:46.122 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:46.122 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:46.122 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:46.122 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:11:46.122 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:46.122 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:46.122 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:46.122 1+0 records in 00:11:46.122 1+0 records out 00:11:46.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255377 s, 16.0 MB/s 00:11:46.122 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.122 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:11:46.122 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.122 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:46.122 12:55:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:11:46.122 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:46.122 12:55:03 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:46.122 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:46.122 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:46.122 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:46.122 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:46.122 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:46.122 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:46.122 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:46.122 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:46.382 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:46.382 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:46.382 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:46.382 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:46.382 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:46.382 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:46.382 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:46.382 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:46.382 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:46.382 12:55:03 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:46.382 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:46.382 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:46.382 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:46.382 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:46.382 12:55:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:46.382 12:55:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:46.382 12:55:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:46.382 12:55:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:46.382 12:55:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:46.382 12:55:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:46.382 12:55:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:46.642 12:55:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:46.642 12:55:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:46.642 12:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:46.642 12:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 87326 00:11:46.642 12:55:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 87326 ']' 00:11:46.642 12:55:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 87326 00:11:46.642 12:55:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:11:46.642 12:55:04 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:46.642 12:55:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87326 00:11:46.642 12:55:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:46.642 12:55:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:46.642 12:55:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87326' 00:11:46.642 killing process with pid 87326 00:11:46.642 12:55:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 87326 00:11:46.642 Received shutdown signal, test time was about 9.897515 seconds 00:11:46.642 00:11:46.642 Latency(us) 00:11:46.642 [2024-11-26T12:55:04.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:46.642 [2024-11-26T12:55:04.326Z] =================================================================================================================== 00:11:46.642 [2024-11-26T12:55:04.326Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:46.642 [2024-11-26 12:55:04.111625] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:46.642 12:55:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 87326 00:11:46.642 [2024-11-26 12:55:04.137920] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:11:46.902 ************************************ 00:11:46.902 00:11:46.902 real 0m11.775s 00:11:46.902 user 0m14.880s 00:11:46.902 sys 0m1.410s 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:46.902 END TEST raid_rebuild_test_io 00:11:46.902 
************************************ 00:11:46.902 12:55:04 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:11:46.902 12:55:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:46.902 12:55:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:46.902 12:55:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:46.902 ************************************ 00:11:46.902 START TEST raid_rebuild_test_sb_io 00:11:46.902 ************************************ 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:46.902 12:55:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87708 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87708 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 87708 ']' 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:46.902 12:55:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:46.902 [2024-11-26 12:55:04.536704] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:46.902 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:46.902 Zero copy mechanism will not be used. 00:11:46.903 [2024-11-26 12:55:04.536939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87708 ] 00:11:47.162 [2024-11-26 12:55:04.696545] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.162 [2024-11-26 12:55:04.743205] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.162 [2024-11-26 12:55:04.786369] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:47.162 [2024-11-26 12:55:04.786401] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:47.731 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:47.731 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:11:47.731 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:47.731 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev1_malloc 00:11:47.731 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.731 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.731 BaseBdev1_malloc 00:11:47.731 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.731 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:47.731 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.731 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.731 [2024-11-26 12:55:05.369373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:47.731 [2024-11-26 12:55:05.369463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.731 [2024-11-26 12:55:05.369488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:47.731 [2024-11-26 12:55:05.369503] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.731 [2024-11-26 12:55:05.371604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.731 [2024-11-26 12:55:05.371640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:47.731 BaseBdev1 00:11:47.731 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.731 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:47.731 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:47.731 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.731 12:55:05 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.990 BaseBdev2_malloc 00:11:47.990 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.990 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:47.990 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.990 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.990 [2024-11-26 12:55:05.414933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:47.990 [2024-11-26 12:55:05.415040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.990 [2024-11-26 12:55:05.415088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:47.990 [2024-11-26 12:55:05.415109] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.990 [2024-11-26 12:55:05.419871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.990 [2024-11-26 12:55:05.419941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:47.990 BaseBdev2 00:11:47.990 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.990 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:47.990 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.990 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.990 spare_malloc 00:11:47.990 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.990 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 
-- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:47.990 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.990 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.990 spare_delay 00:11:47.990 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.990 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:47.990 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.990 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.990 [2024-11-26 12:55:05.458025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:47.990 [2024-11-26 12:55:05.458073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.990 [2024-11-26 12:55:05.458110] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:47.990 [2024-11-26 12:55:05.458118] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.990 [2024-11-26 12:55:05.460167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.990 [2024-11-26 12:55:05.460210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:47.990 spare 00:11:47.990 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.990 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:47.990 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.990 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:11:47.990 [2024-11-26 12:55:05.470056] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:47.990 [2024-11-26 12:55:05.471872] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:47.990 [2024-11-26 12:55:05.472064] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:47.990 [2024-11-26 12:55:05.472081] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:47.990 [2024-11-26 12:55:05.472323] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:47.990 [2024-11-26 12:55:05.472439] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:47.990 [2024-11-26 12:55:05.472456] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:47.990 [2024-11-26 12:55:05.472566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.990 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.990 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:47.990 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.990 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.990 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.990 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.990 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:47.990 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.990 
12:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.990 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.990 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.991 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.991 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.991 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.991 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.991 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.991 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.991 "name": "raid_bdev1", 00:11:47.991 "uuid": "d39a49bb-74ce-4dc8-b01a-92109e3146cc", 00:11:47.991 "strip_size_kb": 0, 00:11:47.991 "state": "online", 00:11:47.991 "raid_level": "raid1", 00:11:47.991 "superblock": true, 00:11:47.991 "num_base_bdevs": 2, 00:11:47.991 "num_base_bdevs_discovered": 2, 00:11:47.991 "num_base_bdevs_operational": 2, 00:11:47.991 "base_bdevs_list": [ 00:11:47.991 { 00:11:47.991 "name": "BaseBdev1", 00:11:47.991 "uuid": "29adddeb-4f13-5db6-8695-89a12e9b2598", 00:11:47.991 "is_configured": true, 00:11:47.991 "data_offset": 2048, 00:11:47.991 "data_size": 63488 00:11:47.991 }, 00:11:47.991 { 00:11:47.991 "name": "BaseBdev2", 00:11:47.991 "uuid": "438df289-e82d-596b-937b-2e9e09bf45ca", 00:11:47.991 "is_configured": true, 00:11:47.991 "data_offset": 2048, 00:11:47.991 "data_size": 63488 00:11:47.991 } 00:11:47.991 ] 00:11:47.991 }' 00:11:47.991 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.991 12:55:05 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:48.558 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:48.558 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:48.558 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.558 12:55:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:48.558 [2024-11-26 12:55:05.981434] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.558 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.558 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:48.558 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.558 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.558 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:48.558 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:48.558 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.558 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:48.558 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:48.558 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:48.558 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:48.559 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.559 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:48.559 [2024-11-26 12:55:06.060964] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:48.559 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.559 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:48.559 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.559 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.559 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.559 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.559 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:48.559 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.559 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.559 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.559 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.559 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.559 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.559 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.559 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:48.559 12:55:06 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.559 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.559 "name": "raid_bdev1", 00:11:48.559 "uuid": "d39a49bb-74ce-4dc8-b01a-92109e3146cc", 00:11:48.559 "strip_size_kb": 0, 00:11:48.559 "state": "online", 00:11:48.559 "raid_level": "raid1", 00:11:48.559 "superblock": true, 00:11:48.559 "num_base_bdevs": 2, 00:11:48.559 "num_base_bdevs_discovered": 1, 00:11:48.559 "num_base_bdevs_operational": 1, 00:11:48.559 "base_bdevs_list": [ 00:11:48.559 { 00:11:48.559 "name": null, 00:11:48.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.559 "is_configured": false, 00:11:48.559 "data_offset": 0, 00:11:48.559 "data_size": 63488 00:11:48.559 }, 00:11:48.559 { 00:11:48.559 "name": "BaseBdev2", 00:11:48.559 "uuid": "438df289-e82d-596b-937b-2e9e09bf45ca", 00:11:48.559 "is_configured": true, 00:11:48.559 "data_offset": 2048, 00:11:48.559 "data_size": 63488 00:11:48.559 } 00:11:48.559 ] 00:11:48.559 }' 00:11:48.559 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.559 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:48.559 [2024-11-26 12:55:06.154806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:48.559 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:48.559 Zero copy mechanism will not be used. 00:11:48.559 Running I/O for 60 seconds... 
00:11:48.819 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:48.819 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.819 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:48.819 [2024-11-26 12:55:06.454558] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:48.819 [2024-11-26 12:55:06.484848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:48.819 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.819 12:55:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:48.819 [2024-11-26 12:55:06.486862] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:49.079 [2024-11-26 12:55:06.610582] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:49.079 [2024-11-26 12:55:06.610994] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:49.338 [2024-11-26 12:55:06.818941] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:49.338 [2024-11-26 12:55:06.819188] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:49.597 [2024-11-26 12:55:07.147221] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:49.597 161.00 IOPS, 483.00 MiB/s [2024-11-26T12:55:07.281Z] [2024-11-26 12:55:07.263165] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:49.857 [2024-11-26 12:55:07.488709] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:49.857 [2024-11-26 12:55:07.489096] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:49.857 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:49.857 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:49.857 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:49.857 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:49.857 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:49.857 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.857 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.857 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.857 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:49.857 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.117 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:50.117 "name": "raid_bdev1", 00:11:50.117 "uuid": "d39a49bb-74ce-4dc8-b01a-92109e3146cc", 00:11:50.117 "strip_size_kb": 0, 00:11:50.117 "state": "online", 00:11:50.117 "raid_level": "raid1", 00:11:50.117 "superblock": true, 00:11:50.117 "num_base_bdevs": 2, 00:11:50.117 "num_base_bdevs_discovered": 2, 00:11:50.117 "num_base_bdevs_operational": 2, 00:11:50.117 "process": { 00:11:50.117 "type": "rebuild", 00:11:50.117 "target": "spare", 00:11:50.117 "progress": { 
00:11:50.117 "blocks": 14336, 00:11:50.117 "percent": 22 00:11:50.117 } 00:11:50.117 }, 00:11:50.117 "base_bdevs_list": [ 00:11:50.117 { 00:11:50.117 "name": "spare", 00:11:50.117 "uuid": "bd74b4ee-183c-5cc8-8a7e-f530b6a353c9", 00:11:50.117 "is_configured": true, 00:11:50.117 "data_offset": 2048, 00:11:50.117 "data_size": 63488 00:11:50.117 }, 00:11:50.117 { 00:11:50.117 "name": "BaseBdev2", 00:11:50.117 "uuid": "438df289-e82d-596b-937b-2e9e09bf45ca", 00:11:50.117 "is_configured": true, 00:11:50.117 "data_offset": 2048, 00:11:50.117 "data_size": 63488 00:11:50.117 } 00:11:50.117 ] 00:11:50.117 }' 00:11:50.117 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:50.117 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:50.117 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:50.117 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:50.117 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:50.117 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.117 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.117 [2024-11-26 12:55:07.649227] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:50.117 [2024-11-26 12:55:07.697092] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:50.376 [2024-11-26 12:55:07.803668] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:50.376 [2024-11-26 12:55:07.817065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.376 [2024-11-26 12:55:07.817101] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:50.376 [2024-11-26 12:55:07.817118] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:50.376 [2024-11-26 12:55:07.829010] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:11:50.376 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.376 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:50.376 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.376 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.376 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.376 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.376 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:50.376 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.376 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.376 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.376 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.376 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.376 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.376 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.376 12:55:07 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.376 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.376 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.376 "name": "raid_bdev1", 00:11:50.376 "uuid": "d39a49bb-74ce-4dc8-b01a-92109e3146cc", 00:11:50.376 "strip_size_kb": 0, 00:11:50.376 "state": "online", 00:11:50.376 "raid_level": "raid1", 00:11:50.376 "superblock": true, 00:11:50.376 "num_base_bdevs": 2, 00:11:50.376 "num_base_bdevs_discovered": 1, 00:11:50.376 "num_base_bdevs_operational": 1, 00:11:50.376 "base_bdevs_list": [ 00:11:50.376 { 00:11:50.376 "name": null, 00:11:50.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.376 "is_configured": false, 00:11:50.376 "data_offset": 0, 00:11:50.376 "data_size": 63488 00:11:50.376 }, 00:11:50.376 { 00:11:50.376 "name": "BaseBdev2", 00:11:50.376 "uuid": "438df289-e82d-596b-937b-2e9e09bf45ca", 00:11:50.376 "is_configured": true, 00:11:50.377 "data_offset": 2048, 00:11:50.377 "data_size": 63488 00:11:50.377 } 00:11:50.377 ] 00:11:50.377 }' 00:11:50.377 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.377 12:55:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.636 170.00 IOPS, 510.00 MiB/s [2024-11-26T12:55:08.320Z] 12:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:50.636 12:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:50.636 12:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:50.636 12:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:50.636 12:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:50.636 
12:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.636 12:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.636 12:55:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.636 12:55:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.636 12:55:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.636 12:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:50.636 "name": "raid_bdev1", 00:11:50.636 "uuid": "d39a49bb-74ce-4dc8-b01a-92109e3146cc", 00:11:50.636 "strip_size_kb": 0, 00:11:50.636 "state": "online", 00:11:50.636 "raid_level": "raid1", 00:11:50.636 "superblock": true, 00:11:50.636 "num_base_bdevs": 2, 00:11:50.636 "num_base_bdevs_discovered": 1, 00:11:50.636 "num_base_bdevs_operational": 1, 00:11:50.636 "base_bdevs_list": [ 00:11:50.636 { 00:11:50.636 "name": null, 00:11:50.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.636 "is_configured": false, 00:11:50.636 "data_offset": 0, 00:11:50.636 "data_size": 63488 00:11:50.636 }, 00:11:50.636 { 00:11:50.636 "name": "BaseBdev2", 00:11:50.636 "uuid": "438df289-e82d-596b-937b-2e9e09bf45ca", 00:11:50.636 "is_configured": true, 00:11:50.636 "data_offset": 2048, 00:11:50.636 "data_size": 63488 00:11:50.636 } 00:11:50.636 ] 00:11:50.636 }' 00:11:50.636 12:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:50.636 12:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:50.636 12:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:50.896 12:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:50.896 12:55:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:50.896 12:55:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.896 12:55:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.896 [2024-11-26 12:55:08.353893] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:50.896 12:55:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.896 12:55:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:50.896 [2024-11-26 12:55:08.390056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:50.896 [2024-11-26 12:55:08.391933] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:50.896 [2024-11-26 12:55:08.504508] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:50.896 [2024-11-26 12:55:08.504785] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:51.155 [2024-11-26 12:55:08.621971] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:51.155 [2024-11-26 12:55:08.622104] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:51.415 [2024-11-26 12:55:08.966261] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:51.415 [2024-11-26 12:55:08.971902] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:51.674 156.67 IOPS, 470.00 MiB/s [2024-11-26T12:55:09.358Z] [2024-11-26 12:55:09.190366] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:51.674 [2024-11-26 12:55:09.190636] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:51.933 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:51.933 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:51.933 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:51.933 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:51.933 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:51.933 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.933 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.933 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.933 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.933 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.933 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:51.933 "name": "raid_bdev1", 00:11:51.933 "uuid": "d39a49bb-74ce-4dc8-b01a-92109e3146cc", 00:11:51.933 "strip_size_kb": 0, 00:11:51.933 "state": "online", 00:11:51.933 "raid_level": "raid1", 00:11:51.933 "superblock": true, 00:11:51.933 "num_base_bdevs": 2, 00:11:51.933 "num_base_bdevs_discovered": 2, 00:11:51.933 "num_base_bdevs_operational": 2, 00:11:51.933 "process": { 00:11:51.933 "type": "rebuild", 00:11:51.933 "target": "spare", 00:11:51.933 "progress": { 
00:11:51.933 "blocks": 12288, 00:11:51.933 "percent": 19 00:11:51.933 } 00:11:51.933 }, 00:11:51.933 "base_bdevs_list": [ 00:11:51.933 { 00:11:51.933 "name": "spare", 00:11:51.933 "uuid": "bd74b4ee-183c-5cc8-8a7e-f530b6a353c9", 00:11:51.933 "is_configured": true, 00:11:51.933 "data_offset": 2048, 00:11:51.933 "data_size": 63488 00:11:51.933 }, 00:11:51.933 { 00:11:51.933 "name": "BaseBdev2", 00:11:51.933 "uuid": "438df289-e82d-596b-937b-2e9e09bf45ca", 00:11:51.933 "is_configured": true, 00:11:51.933 "data_offset": 2048, 00:11:51.933 "data_size": 63488 00:11:51.933 } 00:11:51.933 ] 00:11:51.933 }' 00:11:51.933 [2024-11-26 12:55:09.427725] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:51.934 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:51.934 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:51.934 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:51.934 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:51.934 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:51.934 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:51.934 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:51.934 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:51.934 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:51.934 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:51.934 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=333 00:11:51.934 
12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:51.934 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:51.934 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:51.934 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:51.934 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:51.934 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:51.934 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.934 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.934 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.934 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.934 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.934 [2024-11-26 12:55:09.551992] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:51.934 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:51.934 "name": "raid_bdev1", 00:11:51.934 "uuid": "d39a49bb-74ce-4dc8-b01a-92109e3146cc", 00:11:51.934 "strip_size_kb": 0, 00:11:51.934 "state": "online", 00:11:51.934 "raid_level": "raid1", 00:11:51.934 "superblock": true, 00:11:51.934 "num_base_bdevs": 2, 00:11:51.934 "num_base_bdevs_discovered": 2, 00:11:51.934 "num_base_bdevs_operational": 2, 00:11:51.934 "process": { 00:11:51.934 "type": "rebuild", 00:11:51.934 "target": "spare", 00:11:51.934 "progress": { 00:11:51.934 
"blocks": 14336, 00:11:51.934 "percent": 22 00:11:51.934 } 00:11:51.934 }, 00:11:51.934 "base_bdevs_list": [ 00:11:51.934 { 00:11:51.934 "name": "spare", 00:11:51.934 "uuid": "bd74b4ee-183c-5cc8-8a7e-f530b6a353c9", 00:11:51.934 "is_configured": true, 00:11:51.934 "data_offset": 2048, 00:11:51.934 "data_size": 63488 00:11:51.934 }, 00:11:51.934 { 00:11:51.934 "name": "BaseBdev2", 00:11:51.934 "uuid": "438df289-e82d-596b-937b-2e9e09bf45ca", 00:11:51.934 "is_configured": true, 00:11:51.934 "data_offset": 2048, 00:11:51.934 "data_size": 63488 00:11:51.934 } 00:11:51.934 ] 00:11:51.934 }' 00:11:51.934 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:51.934 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:51.934 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:52.194 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:52.194 12:55:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:52.454 [2024-11-26 12:55:09.873933] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:52.454 [2024-11-26 12:55:09.874460] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:52.454 [2024-11-26 12:55:10.098793] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:52.973 133.50 IOPS, 400.50 MiB/s [2024-11-26T12:55:10.657Z] 12:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:52.973 12:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:52.973 12:55:10 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:52.973 12:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:52.973 12:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:52.973 12:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:53.233 12:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.233 12:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.233 12:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.233 12:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.233 12:55:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.233 12:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:53.233 "name": "raid_bdev1", 00:11:53.233 "uuid": "d39a49bb-74ce-4dc8-b01a-92109e3146cc", 00:11:53.233 "strip_size_kb": 0, 00:11:53.233 "state": "online", 00:11:53.233 "raid_level": "raid1", 00:11:53.233 "superblock": true, 00:11:53.233 "num_base_bdevs": 2, 00:11:53.233 "num_base_bdevs_discovered": 2, 00:11:53.233 "num_base_bdevs_operational": 2, 00:11:53.233 "process": { 00:11:53.233 "type": "rebuild", 00:11:53.233 "target": "spare", 00:11:53.233 "progress": { 00:11:53.233 "blocks": 30720, 00:11:53.233 "percent": 48 00:11:53.233 } 00:11:53.233 }, 00:11:53.233 "base_bdevs_list": [ 00:11:53.233 { 00:11:53.233 "name": "spare", 00:11:53.233 "uuid": "bd74b4ee-183c-5cc8-8a7e-f530b6a353c9", 00:11:53.233 "is_configured": true, 00:11:53.233 "data_offset": 2048, 00:11:53.233 "data_size": 63488 00:11:53.233 }, 00:11:53.233 { 00:11:53.233 "name": "BaseBdev2", 00:11:53.233 "uuid": "438df289-e82d-596b-937b-2e9e09bf45ca", 00:11:53.233 
"is_configured": true, 00:11:53.233 "data_offset": 2048, 00:11:53.233 "data_size": 63488 00:11:53.233 } 00:11:53.233 ] 00:11:53.233 }' 00:11:53.233 12:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:53.233 12:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:53.233 12:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:53.233 12:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:53.233 12:55:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:53.493 [2024-11-26 12:55:11.021474] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:11:53.493 [2024-11-26 12:55:11.021790] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:11:53.493 [2024-11-26 12:55:11.134465] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:11:54.068 117.40 IOPS, 352.20 MiB/s [2024-11-26T12:55:11.752Z] [2024-11-26 12:55:11.537640] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:11:54.347 12:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:54.347 12:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:54.347 12:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:54.347 12:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:54.347 12:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:11:54.347 12:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:54.347 12:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.347 12:55:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.347 12:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.347 12:55:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.347 12:55:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.347 12:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:54.347 "name": "raid_bdev1", 00:11:54.347 "uuid": "d39a49bb-74ce-4dc8-b01a-92109e3146cc", 00:11:54.347 "strip_size_kb": 0, 00:11:54.347 "state": "online", 00:11:54.347 "raid_level": "raid1", 00:11:54.347 "superblock": true, 00:11:54.347 "num_base_bdevs": 2, 00:11:54.347 "num_base_bdevs_discovered": 2, 00:11:54.347 "num_base_bdevs_operational": 2, 00:11:54.347 "process": { 00:11:54.347 "type": "rebuild", 00:11:54.347 "target": "spare", 00:11:54.347 "progress": { 00:11:54.347 "blocks": 49152, 00:11:54.347 "percent": 77 00:11:54.347 } 00:11:54.347 }, 00:11:54.347 "base_bdevs_list": [ 00:11:54.347 { 00:11:54.347 "name": "spare", 00:11:54.347 "uuid": "bd74b4ee-183c-5cc8-8a7e-f530b6a353c9", 00:11:54.347 "is_configured": true, 00:11:54.347 "data_offset": 2048, 00:11:54.347 "data_size": 63488 00:11:54.347 }, 00:11:54.347 { 00:11:54.347 "name": "BaseBdev2", 00:11:54.347 "uuid": "438df289-e82d-596b-937b-2e9e09bf45ca", 00:11:54.347 "is_configured": true, 00:11:54.347 "data_offset": 2048, 00:11:54.347 "data_size": 63488 00:11:54.347 } 00:11:54.347 ] 00:11:54.347 }' 00:11:54.347 12:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:54.347 [2024-11-26 
12:55:11.864454] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:11:54.347 12:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:54.347 12:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:54.347 12:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:54.347 12:55:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:54.607 [2024-11-26 12:55:12.084277] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:11:54.866 103.83 IOPS, 311.50 MiB/s [2024-11-26T12:55:12.550Z] [2024-11-26 12:55:12.397024] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:11:55.126 [2024-11-26 12:55:12.617854] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:55.126 [2024-11-26 12:55:12.722708] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:55.126 [2024-11-26 12:55:12.724424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.386 12:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:55.386 12:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:55.386 12:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:55.386 12:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:55.386 12:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:55.386 12:55:12 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:55.386 12:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.386 12:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.386 12:55:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.386 12:55:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.386 12:55:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.386 12:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:55.386 "name": "raid_bdev1", 00:11:55.386 "uuid": "d39a49bb-74ce-4dc8-b01a-92109e3146cc", 00:11:55.386 "strip_size_kb": 0, 00:11:55.386 "state": "online", 00:11:55.386 "raid_level": "raid1", 00:11:55.386 "superblock": true, 00:11:55.386 "num_base_bdevs": 2, 00:11:55.386 "num_base_bdevs_discovered": 2, 00:11:55.386 "num_base_bdevs_operational": 2, 00:11:55.386 "base_bdevs_list": [ 00:11:55.386 { 00:11:55.386 "name": "spare", 00:11:55.386 "uuid": "bd74b4ee-183c-5cc8-8a7e-f530b6a353c9", 00:11:55.386 "is_configured": true, 00:11:55.386 "data_offset": 2048, 00:11:55.386 "data_size": 63488 00:11:55.386 }, 00:11:55.386 { 00:11:55.386 "name": "BaseBdev2", 00:11:55.386 "uuid": "438df289-e82d-596b-937b-2e9e09bf45ca", 00:11:55.386 "is_configured": true, 00:11:55.386 "data_offset": 2048, 00:11:55.386 "data_size": 63488 00:11:55.386 } 00:11:55.386 ] 00:11:55.386 }' 00:11:55.386 12:55:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:55.386 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:55.386 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:55.646 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:55.646 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:11:55.646 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:55.646 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:55.646 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:55.646 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:55.646 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:55.646 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.646 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.646 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.646 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.646 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.646 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:55.646 "name": "raid_bdev1", 00:11:55.646 "uuid": "d39a49bb-74ce-4dc8-b01a-92109e3146cc", 00:11:55.646 "strip_size_kb": 0, 00:11:55.646 "state": "online", 00:11:55.646 "raid_level": "raid1", 00:11:55.646 "superblock": true, 00:11:55.646 "num_base_bdevs": 2, 00:11:55.646 "num_base_bdevs_discovered": 2, 00:11:55.646 "num_base_bdevs_operational": 2, 00:11:55.646 "base_bdevs_list": [ 00:11:55.646 { 00:11:55.646 "name": "spare", 00:11:55.646 "uuid": "bd74b4ee-183c-5cc8-8a7e-f530b6a353c9", 00:11:55.646 "is_configured": true, 00:11:55.646 "data_offset": 2048, 00:11:55.646 "data_size": 63488 00:11:55.646 }, 
00:11:55.646 { 00:11:55.646 "name": "BaseBdev2", 00:11:55.646 "uuid": "438df289-e82d-596b-937b-2e9e09bf45ca", 00:11:55.646 "is_configured": true, 00:11:55.646 "data_offset": 2048, 00:11:55.646 "data_size": 63488 00:11:55.646 } 00:11:55.646 ] 00:11:55.646 }' 00:11:55.646 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:55.646 94.29 IOPS, 282.86 MiB/s [2024-11-26T12:55:13.330Z] 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:55.646 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:55.646 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:55.646 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:55.646 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:55.646 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.646 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.646 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.647 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:55.647 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.647 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.647 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.647 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.647 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:11:55.647 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.647 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.647 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.647 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.647 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.647 "name": "raid_bdev1", 00:11:55.647 "uuid": "d39a49bb-74ce-4dc8-b01a-92109e3146cc", 00:11:55.647 "strip_size_kb": 0, 00:11:55.647 "state": "online", 00:11:55.647 "raid_level": "raid1", 00:11:55.647 "superblock": true, 00:11:55.647 "num_base_bdevs": 2, 00:11:55.647 "num_base_bdevs_discovered": 2, 00:11:55.647 "num_base_bdevs_operational": 2, 00:11:55.647 "base_bdevs_list": [ 00:11:55.647 { 00:11:55.647 "name": "spare", 00:11:55.647 "uuid": "bd74b4ee-183c-5cc8-8a7e-f530b6a353c9", 00:11:55.647 "is_configured": true, 00:11:55.647 "data_offset": 2048, 00:11:55.647 "data_size": 63488 00:11:55.647 }, 00:11:55.647 { 00:11:55.647 "name": "BaseBdev2", 00:11:55.647 "uuid": "438df289-e82d-596b-937b-2e9e09bf45ca", 00:11:55.647 "is_configured": true, 00:11:55.647 "data_offset": 2048, 00:11:55.647 "data_size": 63488 00:11:55.647 } 00:11:55.647 ] 00:11:55.647 }' 00:11:55.647 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.647 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.906 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:55.906 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.906 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.906 [2024-11-26 
12:55:13.546924] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:55.906 [2024-11-26 12:55:13.546953] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:56.166 00:11:56.166 Latency(us) 00:11:56.167 [2024-11-26T12:55:13.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.167 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:56.167 raid_bdev1 : 7.44 90.72 272.17 0.00 0.00 14505.32 277.24 111726.00 00:11:56.167 [2024-11-26T12:55:13.851Z] =================================================================================================================== 00:11:56.167 [2024-11-26T12:55:13.851Z] Total : 90.72 272.17 0.00 0.00 14505.32 277.24 111726.00 00:11:56.167 [2024-11-26 12:55:13.586621] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.167 { 00:11:56.167 "results": [ 00:11:56.167 { 00:11:56.167 "job": "raid_bdev1", 00:11:56.167 "core_mask": "0x1", 00:11:56.167 "workload": "randrw", 00:11:56.167 "percentage": 50, 00:11:56.167 "status": "finished", 00:11:56.167 "queue_depth": 2, 00:11:56.167 "io_size": 3145728, 00:11:56.167 "runtime": 7.440259, 00:11:56.167 "iops": 90.7226482303909, 00:11:56.167 "mibps": 272.1679446911727, 00:11:56.167 "io_failed": 0, 00:11:56.167 "io_timeout": 0, 00:11:56.167 "avg_latency_us": 14505.322532104155, 00:11:56.167 "min_latency_us": 277.2401746724891, 00:11:56.167 "max_latency_us": 111726.00174672488 00:11:56.167 } 00:11:56.167 ], 00:11:56.167 "core_count": 1 00:11:56.167 } 00:11:56.167 [2024-11-26 12:55:13.586709] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:56.167 [2024-11-26 12:55:13.586806] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:56.167 [2024-11-26 12:55:13.586818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006280 name raid_bdev1, state offline 00:11:56.167 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.167 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.167 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.167 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.167 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:56.167 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.167 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:56.167 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:56.167 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:56.167 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:11:56.167 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:56.167 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:56.167 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:56.167 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:56.167 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:56.167 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:56.167 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:56.167 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:56.167 12:55:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:56.167 /dev/nbd0 00:11:56.427 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:56.427 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:56.427 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:56.427 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:11:56.427 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:56.427 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:56.427 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:56.427 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:11:56.427 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:56.427 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:56.427 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:56.427 1+0 records in 00:11:56.427 1+0 records out 00:11:56.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000569242 s, 7.2 MB/s 00:11:56.427 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.427 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:11:56.427 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.427 
12:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:56.427 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:11:56.427 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:56.427 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:56.427 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:56.427 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:11:56.427 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:11:56.427 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:56.427 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:11:56.427 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:56.427 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:56.427 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:56.427 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:56.427 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:56.427 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:56.427 12:55:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:11:56.427 /dev/nbd1 00:11:56.427 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:56.427 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd1 00:11:56.427 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:56.427 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:11:56.427 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:56.687 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:56.687 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:56.687 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:11:56.687 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:56.687 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:56.687 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:56.687 1+0 records in 00:11:56.687 1+0 records out 00:11:56.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404819 s, 10.1 MB/s 00:11:56.687 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.687 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:11:56.687 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.687 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:56.687 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:11:56.687 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:56.687 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:56.687 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:56.687 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:56.687 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:56.687 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:56.687 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:56.687 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:11:56.687 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:56.687 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:56.946 12:55:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.946 12:55:14 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.946 [2024-11-26 12:55:14.607229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:56.946 [2024-11-26 12:55:14.607326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.946 [2024-11-26 12:55:14.607373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:56.946 [2024-11-26 12:55:14.607404] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.946 [2024-11-26 12:55:14.609618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.946 [2024-11-26 12:55:14.609689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:56.946 [2024-11-26 12:55:14.609794] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:56.946 [2024-11-26 12:55:14.609867] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:56.946 [2024-11-26 12:55:14.610019] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:56.946 spare 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.946 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:11:57.205 [2024-11-26 12:55:14.709973] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:11:57.205 [2024-11-26 12:55:14.710043] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:57.205 [2024-11-26 12:55:14.710341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30 00:11:57.205 [2024-11-26 12:55:14.710521] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:11:57.205 [2024-11-26 12:55:14.710565] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:11:57.205 [2024-11-26 12:55:14.710737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.205 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.205 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:57.205 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.205 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.205 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.205 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.205 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:57.205 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.205 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.205 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.205 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 
-- # local tmp 00:11:57.205 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.205 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.205 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.205 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.205 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.205 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.205 "name": "raid_bdev1", 00:11:57.205 "uuid": "d39a49bb-74ce-4dc8-b01a-92109e3146cc", 00:11:57.205 "strip_size_kb": 0, 00:11:57.205 "state": "online", 00:11:57.205 "raid_level": "raid1", 00:11:57.205 "superblock": true, 00:11:57.205 "num_base_bdevs": 2, 00:11:57.205 "num_base_bdevs_discovered": 2, 00:11:57.205 "num_base_bdevs_operational": 2, 00:11:57.205 "base_bdevs_list": [ 00:11:57.205 { 00:11:57.205 "name": "spare", 00:11:57.205 "uuid": "bd74b4ee-183c-5cc8-8a7e-f530b6a353c9", 00:11:57.205 "is_configured": true, 00:11:57.205 "data_offset": 2048, 00:11:57.205 "data_size": 63488 00:11:57.205 }, 00:11:57.205 { 00:11:57.205 "name": "BaseBdev2", 00:11:57.205 "uuid": "438df289-e82d-596b-937b-2e9e09bf45ca", 00:11:57.205 "is_configured": true, 00:11:57.205 "data_offset": 2048, 00:11:57.205 "data_size": 63488 00:11:57.205 } 00:11:57.205 ] 00:11:57.205 }' 00:11:57.205 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.205 12:55:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.772 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:57.772 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:11:57.772 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:57.772 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:57.772 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:57.772 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.772 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.772 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.772 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:57.773 "name": "raid_bdev1", 00:11:57.773 "uuid": "d39a49bb-74ce-4dc8-b01a-92109e3146cc", 00:11:57.773 "strip_size_kb": 0, 00:11:57.773 "state": "online", 00:11:57.773 "raid_level": "raid1", 00:11:57.773 "superblock": true, 00:11:57.773 "num_base_bdevs": 2, 00:11:57.773 "num_base_bdevs_discovered": 2, 00:11:57.773 "num_base_bdevs_operational": 2, 00:11:57.773 "base_bdevs_list": [ 00:11:57.773 { 00:11:57.773 "name": "spare", 00:11:57.773 "uuid": "bd74b4ee-183c-5cc8-8a7e-f530b6a353c9", 00:11:57.773 "is_configured": true, 00:11:57.773 "data_offset": 2048, 00:11:57.773 "data_size": 63488 00:11:57.773 }, 00:11:57.773 { 00:11:57.773 "name": "BaseBdev2", 00:11:57.773 "uuid": "438df289-e82d-596b-937b-2e9e09bf45ca", 00:11:57.773 "is_configured": true, 00:11:57.773 "data_offset": 2048, 00:11:57.773 "data_size": 63488 00:11:57.773 } 00:11:57.773 ] 00:11:57.773 }' 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.773 [2024-11-26 12:55:15.369972] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.773 "name": "raid_bdev1", 00:11:57.773 "uuid": "d39a49bb-74ce-4dc8-b01a-92109e3146cc", 00:11:57.773 "strip_size_kb": 0, 00:11:57.773 "state": "online", 00:11:57.773 "raid_level": "raid1", 00:11:57.773 "superblock": true, 00:11:57.773 "num_base_bdevs": 2, 00:11:57.773 "num_base_bdevs_discovered": 1, 00:11:57.773 "num_base_bdevs_operational": 1, 00:11:57.773 "base_bdevs_list": [ 00:11:57.773 { 00:11:57.773 "name": null, 00:11:57.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.773 "is_configured": false, 00:11:57.773 "data_offset": 0, 00:11:57.773 "data_size": 63488 00:11:57.773 }, 00:11:57.773 { 
00:11:57.773 "name": "BaseBdev2", 00:11:57.773 "uuid": "438df289-e82d-596b-937b-2e9e09bf45ca", 00:11:57.773 "is_configured": true, 00:11:57.773 "data_offset": 2048, 00:11:57.773 "data_size": 63488 00:11:57.773 } 00:11:57.773 ] 00:11:57.773 }' 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.773 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:58.342 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:58.342 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.342 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:58.342 [2024-11-26 12:55:15.765348] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:58.342 [2024-11-26 12:55:15.765567] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:58.342 [2024-11-26 12:55:15.765626] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:11:58.342 [2024-11-26 12:55:15.765725] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:58.342 [2024-11-26 12:55:15.770262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b000 00:11:58.342 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.342 12:55:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:11:58.342 [2024-11-26 12:55:15.772146] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:59.284 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:59.284 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:59.284 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:59.284 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:59.284 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:59.284 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.284 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.284 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.284 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.284 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.284 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:59.284 "name": "raid_bdev1", 00:11:59.284 "uuid": "d39a49bb-74ce-4dc8-b01a-92109e3146cc", 00:11:59.284 "strip_size_kb": 0, 00:11:59.284 "state": "online", 
00:11:59.284 "raid_level": "raid1", 00:11:59.284 "superblock": true, 00:11:59.284 "num_base_bdevs": 2, 00:11:59.284 "num_base_bdevs_discovered": 2, 00:11:59.284 "num_base_bdevs_operational": 2, 00:11:59.284 "process": { 00:11:59.284 "type": "rebuild", 00:11:59.284 "target": "spare", 00:11:59.284 "progress": { 00:11:59.284 "blocks": 20480, 00:11:59.284 "percent": 32 00:11:59.284 } 00:11:59.284 }, 00:11:59.284 "base_bdevs_list": [ 00:11:59.284 { 00:11:59.284 "name": "spare", 00:11:59.284 "uuid": "bd74b4ee-183c-5cc8-8a7e-f530b6a353c9", 00:11:59.284 "is_configured": true, 00:11:59.284 "data_offset": 2048, 00:11:59.284 "data_size": 63488 00:11:59.284 }, 00:11:59.284 { 00:11:59.284 "name": "BaseBdev2", 00:11:59.284 "uuid": "438df289-e82d-596b-937b-2e9e09bf45ca", 00:11:59.284 "is_configured": true, 00:11:59.284 "data_offset": 2048, 00:11:59.284 "data_size": 63488 00:11:59.284 } 00:11:59.284 ] 00:11:59.284 }' 00:11:59.284 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:59.284 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:59.284 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:59.284 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:59.284 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:11:59.284 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.284 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.284 [2024-11-26 12:55:16.908380] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:59.544 [2024-11-26 12:55:16.976176] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:59.544 [2024-11-26 
12:55:16.976315] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.544 [2024-11-26 12:55:16.976351] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:59.544 [2024-11-26 12:55:16.976374] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:59.544 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.544 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:59.544 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.544 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.544 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.544 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.544 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:59.544 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.544 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.544 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.544 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.544 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.544 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.544 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.544 12:55:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:11:59.544 12:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.544 12:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.544 "name": "raid_bdev1", 00:11:59.544 "uuid": "d39a49bb-74ce-4dc8-b01a-92109e3146cc", 00:11:59.544 "strip_size_kb": 0, 00:11:59.544 "state": "online", 00:11:59.544 "raid_level": "raid1", 00:11:59.544 "superblock": true, 00:11:59.544 "num_base_bdevs": 2, 00:11:59.544 "num_base_bdevs_discovered": 1, 00:11:59.544 "num_base_bdevs_operational": 1, 00:11:59.544 "base_bdevs_list": [ 00:11:59.544 { 00:11:59.544 "name": null, 00:11:59.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.544 "is_configured": false, 00:11:59.544 "data_offset": 0, 00:11:59.544 "data_size": 63488 00:11:59.544 }, 00:11:59.544 { 00:11:59.544 "name": "BaseBdev2", 00:11:59.544 "uuid": "438df289-e82d-596b-937b-2e9e09bf45ca", 00:11:59.544 "is_configured": true, 00:11:59.544 "data_offset": 2048, 00:11:59.544 "data_size": 63488 00:11:59.544 } 00:11:59.544 ] 00:11:59.544 }' 00:11:59.544 12:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.544 12:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.804 12:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:59.804 12:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.804 12:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.804 [2024-11-26 12:55:17.432063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:59.804 [2024-11-26 12:55:17.432167] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.804 [2024-11-26 12:55:17.432213] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:11:59.804 [2024-11-26 12:55:17.432242] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.804 [2024-11-26 12:55:17.432710] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.804 [2024-11-26 12:55:17.432772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:59.804 [2024-11-26 12:55:17.432876] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:59.804 [2024-11-26 12:55:17.432920] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:59.804 [2024-11-26 12:55:17.432964] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:59.804 [2024-11-26 12:55:17.433041] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:59.804 [2024-11-26 12:55:17.437284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:11:59.804 spare 00:11:59.804 12:55:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.804 12:55:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:11:59.804 [2024-11-26 12:55:17.439150] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:01.185 "name": "raid_bdev1", 00:12:01.185 "uuid": "d39a49bb-74ce-4dc8-b01a-92109e3146cc", 00:12:01.185 "strip_size_kb": 0, 00:12:01.185 "state": "online", 00:12:01.185 "raid_level": "raid1", 00:12:01.185 "superblock": true, 00:12:01.185 "num_base_bdevs": 2, 00:12:01.185 "num_base_bdevs_discovered": 2, 00:12:01.185 "num_base_bdevs_operational": 2, 00:12:01.185 "process": { 00:12:01.185 "type": "rebuild", 00:12:01.185 "target": "spare", 00:12:01.185 "progress": { 00:12:01.185 "blocks": 20480, 00:12:01.185 "percent": 32 00:12:01.185 } 00:12:01.185 }, 00:12:01.185 "base_bdevs_list": [ 00:12:01.185 { 00:12:01.185 "name": "spare", 00:12:01.185 "uuid": "bd74b4ee-183c-5cc8-8a7e-f530b6a353c9", 00:12:01.185 "is_configured": true, 00:12:01.185 "data_offset": 2048, 00:12:01.185 "data_size": 63488 00:12:01.185 }, 00:12:01.185 { 00:12:01.185 "name": "BaseBdev2", 00:12:01.185 "uuid": "438df289-e82d-596b-937b-2e9e09bf45ca", 00:12:01.185 "is_configured": true, 00:12:01.185 "data_offset": 2048, 00:12:01.185 "data_size": 63488 00:12:01.185 } 00:12:01.185 ] 00:12:01.185 }' 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:01.185 [2024-11-26 12:55:18.579357] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:01.185 [2024-11-26 12:55:18.643092] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:01.185 [2024-11-26 12:55:18.643173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.185 [2024-11-26 12:55:18.643250] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:01.185 [2024-11-26 12:55:18.643273] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.185 "name": "raid_bdev1", 00:12:01.185 "uuid": "d39a49bb-74ce-4dc8-b01a-92109e3146cc", 00:12:01.185 "strip_size_kb": 0, 00:12:01.185 "state": "online", 00:12:01.185 "raid_level": "raid1", 00:12:01.185 "superblock": true, 00:12:01.185 "num_base_bdevs": 2, 00:12:01.185 "num_base_bdevs_discovered": 1, 00:12:01.185 "num_base_bdevs_operational": 1, 00:12:01.185 "base_bdevs_list": [ 00:12:01.185 { 00:12:01.185 "name": null, 00:12:01.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.185 "is_configured": false, 00:12:01.185 "data_offset": 0, 00:12:01.185 "data_size": 63488 00:12:01.185 }, 00:12:01.185 { 00:12:01.185 "name": "BaseBdev2", 00:12:01.185 "uuid": "438df289-e82d-596b-937b-2e9e09bf45ca", 00:12:01.185 "is_configured": true, 00:12:01.185 "data_offset": 2048, 00:12:01.185 "data_size": 63488 00:12:01.185 } 00:12:01.185 ] 00:12:01.185 }' 
00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.185 12:55:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:01.755 12:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:01.755 12:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:01.755 12:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:01.755 12:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:01.755 12:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:01.755 12:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.755 12:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.755 12:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.755 12:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:01.755 12:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.755 12:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:01.755 "name": "raid_bdev1", 00:12:01.755 "uuid": "d39a49bb-74ce-4dc8-b01a-92109e3146cc", 00:12:01.755 "strip_size_kb": 0, 00:12:01.755 "state": "online", 00:12:01.755 "raid_level": "raid1", 00:12:01.755 "superblock": true, 00:12:01.755 "num_base_bdevs": 2, 00:12:01.755 "num_base_bdevs_discovered": 1, 00:12:01.755 "num_base_bdevs_operational": 1, 00:12:01.755 "base_bdevs_list": [ 00:12:01.755 { 00:12:01.755 "name": null, 00:12:01.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.755 "is_configured": false, 00:12:01.755 "data_offset": 0, 
00:12:01.755 "data_size": 63488 00:12:01.755 }, 00:12:01.755 { 00:12:01.755 "name": "BaseBdev2", 00:12:01.755 "uuid": "438df289-e82d-596b-937b-2e9e09bf45ca", 00:12:01.755 "is_configured": true, 00:12:01.755 "data_offset": 2048, 00:12:01.755 "data_size": 63488 00:12:01.755 } 00:12:01.755 ] 00:12:01.755 }' 00:12:01.755 12:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:01.755 12:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:01.755 12:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:01.755 12:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:01.755 12:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:01.755 12:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.755 12:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:01.755 12:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.755 12:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:01.755 12:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.755 12:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:01.755 [2024-11-26 12:55:19.302465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:01.755 [2024-11-26 12:55:19.302516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.755 [2024-11-26 12:55:19.302537] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:01.755 [2024-11-26 12:55:19.302545] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.755 [2024-11-26 12:55:19.302924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.755 [2024-11-26 12:55:19.302940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:01.755 [2024-11-26 12:55:19.303005] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:01.755 [2024-11-26 12:55:19.303024] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:01.755 [2024-11-26 12:55:19.303040] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:01.755 [2024-11-26 12:55:19.303049] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:01.755 BaseBdev1 00:12:01.755 12:55:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.755 12:55:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:02.693 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:02.693 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.693 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.693 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.693 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.693 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:02.693 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.693 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.693 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.693 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.693 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.693 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.693 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.693 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:02.693 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.693 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.693 "name": "raid_bdev1", 00:12:02.693 "uuid": "d39a49bb-74ce-4dc8-b01a-92109e3146cc", 00:12:02.693 "strip_size_kb": 0, 00:12:02.693 "state": "online", 00:12:02.693 "raid_level": "raid1", 00:12:02.693 "superblock": true, 00:12:02.693 "num_base_bdevs": 2, 00:12:02.693 "num_base_bdevs_discovered": 1, 00:12:02.693 "num_base_bdevs_operational": 1, 00:12:02.693 "base_bdevs_list": [ 00:12:02.693 { 00:12:02.693 "name": null, 00:12:02.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.693 "is_configured": false, 00:12:02.693 "data_offset": 0, 00:12:02.693 "data_size": 63488 00:12:02.693 }, 00:12:02.693 { 00:12:02.693 "name": "BaseBdev2", 00:12:02.693 "uuid": "438df289-e82d-596b-937b-2e9e09bf45ca", 00:12:02.693 "is_configured": true, 00:12:02.693 "data_offset": 2048, 00:12:02.693 "data_size": 63488 00:12:02.693 } 00:12:02.693 ] 00:12:02.693 }' 00:12:02.693 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.693 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:12:03.261 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:03.261 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:03.261 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:03.261 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:03.261 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:03.261 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.261 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.261 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.261 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.261 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.261 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:03.261 "name": "raid_bdev1", 00:12:03.261 "uuid": "d39a49bb-74ce-4dc8-b01a-92109e3146cc", 00:12:03.261 "strip_size_kb": 0, 00:12:03.261 "state": "online", 00:12:03.261 "raid_level": "raid1", 00:12:03.261 "superblock": true, 00:12:03.261 "num_base_bdevs": 2, 00:12:03.261 "num_base_bdevs_discovered": 1, 00:12:03.261 "num_base_bdevs_operational": 1, 00:12:03.261 "base_bdevs_list": [ 00:12:03.261 { 00:12:03.261 "name": null, 00:12:03.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.261 "is_configured": false, 00:12:03.261 "data_offset": 0, 00:12:03.261 "data_size": 63488 00:12:03.261 }, 00:12:03.261 { 00:12:03.261 "name": "BaseBdev2", 00:12:03.261 "uuid": "438df289-e82d-596b-937b-2e9e09bf45ca", 00:12:03.261 "is_configured": true, 
00:12:03.261 "data_offset": 2048, 00:12:03.261 "data_size": 63488 00:12:03.261 } 00:12:03.261 ] 00:12:03.261 }' 00:12:03.261 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:03.261 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:03.261 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:03.261 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:03.261 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:03.261 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:12:03.261 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:03.261 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:03.261 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:03.262 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:03.262 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:03.262 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:03.262 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.262 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.262 [2024-11-26 12:55:20.903964] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:03.262 [2024-11-26 12:55:20.904117] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:03.262 [2024-11-26 12:55:20.904133] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:03.262 request: 00:12:03.262 { 00:12:03.262 "base_bdev": "BaseBdev1", 00:12:03.262 "raid_bdev": "raid_bdev1", 00:12:03.262 "method": "bdev_raid_add_base_bdev", 00:12:03.262 "req_id": 1 00:12:03.262 } 00:12:03.262 Got JSON-RPC error response 00:12:03.262 response: 00:12:03.262 { 00:12:03.262 "code": -22, 00:12:03.262 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:03.262 } 00:12:03.262 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:03.262 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:12:03.262 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:03.262 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:03.262 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:03.262 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:04.641 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:04.641 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.641 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.641 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.641 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.641 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:12:04.641 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.641 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.641 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.641 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.641 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.641 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.641 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.641 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.641 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.641 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.641 "name": "raid_bdev1", 00:12:04.641 "uuid": "d39a49bb-74ce-4dc8-b01a-92109e3146cc", 00:12:04.641 "strip_size_kb": 0, 00:12:04.641 "state": "online", 00:12:04.641 "raid_level": "raid1", 00:12:04.641 "superblock": true, 00:12:04.641 "num_base_bdevs": 2, 00:12:04.641 "num_base_bdevs_discovered": 1, 00:12:04.641 "num_base_bdevs_operational": 1, 00:12:04.641 "base_bdevs_list": [ 00:12:04.641 { 00:12:04.641 "name": null, 00:12:04.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.641 "is_configured": false, 00:12:04.641 "data_offset": 0, 00:12:04.641 "data_size": 63488 00:12:04.641 }, 00:12:04.641 { 00:12:04.641 "name": "BaseBdev2", 00:12:04.641 "uuid": "438df289-e82d-596b-937b-2e9e09bf45ca", 00:12:04.641 "is_configured": true, 00:12:04.641 "data_offset": 2048, 00:12:04.641 "data_size": 63488 00:12:04.641 } 00:12:04.641 ] 00:12:04.641 }' 
00:12:04.641 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.641 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.901 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:04.901 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:04.901 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:04.901 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:04.901 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:04.901 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.901 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.901 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.901 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.901 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.901 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:04.901 "name": "raid_bdev1", 00:12:04.901 "uuid": "d39a49bb-74ce-4dc8-b01a-92109e3146cc", 00:12:04.901 "strip_size_kb": 0, 00:12:04.901 "state": "online", 00:12:04.901 "raid_level": "raid1", 00:12:04.901 "superblock": true, 00:12:04.901 "num_base_bdevs": 2, 00:12:04.901 "num_base_bdevs_discovered": 1, 00:12:04.901 "num_base_bdevs_operational": 1, 00:12:04.901 "base_bdevs_list": [ 00:12:04.901 { 00:12:04.901 "name": null, 00:12:04.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.901 "is_configured": false, 00:12:04.901 "data_offset": 0, 
00:12:04.901 "data_size": 63488 00:12:04.901 }, 00:12:04.901 { 00:12:04.901 "name": "BaseBdev2", 00:12:04.901 "uuid": "438df289-e82d-596b-937b-2e9e09bf45ca", 00:12:04.901 "is_configured": true, 00:12:04.901 "data_offset": 2048, 00:12:04.901 "data_size": 63488 00:12:04.901 } 00:12:04.901 ] 00:12:04.901 }' 00:12:04.901 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:04.901 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:04.901 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:04.901 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:04.901 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 87708 00:12:04.901 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 87708 ']' 00:12:04.901 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 87708 00:12:04.901 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:12:04.901 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:04.901 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87708 00:12:04.901 killing process with pid 87708 00:12:04.901 Received shutdown signal, test time was about 16.372997 seconds 00:12:04.901 00:12:04.901 Latency(us) 00:12:04.901 [2024-11-26T12:55:22.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:04.901 [2024-11-26T12:55:22.585Z] =================================================================================================================== 00:12:04.901 [2024-11-26T12:55:22.585Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:04.901 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:04.901 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:04.901 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87708' 00:12:04.901 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 87708 00:12:04.901 [2024-11-26 12:55:22.498338] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:04.901 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 87708 00:12:04.901 [2024-11-26 12:55:22.498477] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:04.901 [2024-11-26 12:55:22.498536] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:04.901 [2024-11-26 12:55:22.498550] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:12:04.901 [2024-11-26 12:55:22.524272] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:05.161 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:05.161 00:12:05.161 real 0m18.312s 00:12:05.161 user 0m24.287s 00:12:05.161 sys 0m1.997s 00:12:05.161 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:05.161 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.161 ************************************ 00:12:05.161 END TEST raid_rebuild_test_sb_io 00:12:05.161 ************************************ 00:12:05.161 12:55:22 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:05.161 12:55:22 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:12:05.161 12:55:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 
00:12:05.161 12:55:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:05.161 12:55:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:05.161 ************************************ 00:12:05.161 START TEST raid_rebuild_test 00:12:05.161 ************************************ 00:12:05.161 12:55:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:12:05.161 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:05.161 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:05.161 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:05.161 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:05.161 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:05.161 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:05.161 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:05.161 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:05.161 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:05.161 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:05.422 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:05.422 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:05.422 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:05.422 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:05.422 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:05.422 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i 
<= num_base_bdevs )) 00:12:05.422 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:05.422 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:05.422 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:05.422 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:05.422 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:05.422 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:05.422 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:05.422 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:05.422 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:05.422 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:05.422 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:05.422 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:05.422 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:05.422 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=88380 00:12:05.422 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 88380 00:12:05.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:05.422 12:55:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 88380 ']' 00:12:05.422 12:55:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:05.422 12:55:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.422 12:55:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:05.422 12:55:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.422 12:55:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:05.422 12:55:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.422 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:05.422 Zero copy mechanism will not be used. 00:12:05.422 [2024-11-26 12:55:22.924913] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:05.422 [2024-11-26 12:55:22.925042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88380 ] 00:12:05.422 [2024-11-26 12:55:23.085874] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.682 [2024-11-26 12:55:23.131156] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.682 [2024-11-26 12:55:23.173573] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:05.682 [2024-11-26 12:55:23.173695] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:06.252 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:06.252 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:12:06.252 12:55:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:06.252 12:55:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:06.252 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.252 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.252 BaseBdev1_malloc 00:12:06.252 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.252 12:55:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:06.252 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.252 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.252 [2024-11-26 12:55:23.740052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:06.252 
[2024-11-26 12:55:23.740119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.252 [2024-11-26 12:55:23.740145] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:06.252 [2024-11-26 12:55:23.740166] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.252 [2024-11-26 12:55:23.742250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.252 [2024-11-26 12:55:23.742283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:06.252 BaseBdev1 00:12:06.252 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.252 12:55:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:06.252 12:55:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:06.252 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.252 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.252 BaseBdev2_malloc 00:12:06.252 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.252 12:55:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:06.252 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.252 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.252 [2024-11-26 12:55:23.779834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:06.252 [2024-11-26 12:55:23.779937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.252 [2024-11-26 12:55:23.779983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:12:06.252 [2024-11-26 12:55:23.780004] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.252 [2024-11-26 12:55:23.784804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.252 [2024-11-26 12:55:23.784875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:06.252 BaseBdev2 00:12:06.252 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.252 12:55:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:06.252 12:55:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:06.252 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.252 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.252 BaseBdev3_malloc 00:12:06.252 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.252 12:55:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:06.252 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.252 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.252 [2024-11-26 12:55:23.811041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:06.252 [2024-11-26 12:55:23.811087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.252 [2024-11-26 12:55:23.811126] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:06.252 [2024-11-26 12:55:23.811134] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.252 [2024-11-26 12:55:23.813212] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:12:06.252 [2024-11-26 12:55:23.813244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:06.252 BaseBdev3 00:12:06.252 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.252 12:55:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.253 BaseBdev4_malloc 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.253 [2024-11-26 12:55:23.839627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:06.253 [2024-11-26 12:55:23.839678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.253 [2024-11-26 12:55:23.839702] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:06.253 [2024-11-26 12:55:23.839710] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.253 [2024-11-26 12:55:23.841730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.253 [2024-11-26 12:55:23.841765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:06.253 BaseBdev4 00:12:06.253 12:55:23 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.253 spare_malloc 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.253 spare_delay 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.253 [2024-11-26 12:55:23.880147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:06.253 [2024-11-26 12:55:23.880206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.253 [2024-11-26 12:55:23.880228] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:06.253 [2024-11-26 12:55:23.880237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.253 [2024-11-26 12:55:23.882261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.253 [2024-11-26 12:55:23.882345] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:06.253 spare 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.253 [2024-11-26 12:55:23.892221] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:06.253 [2024-11-26 12:55:23.894012] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:06.253 [2024-11-26 12:55:23.894076] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:06.253 [2024-11-26 12:55:23.894114] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:06.253 [2024-11-26 12:55:23.894200] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:06.253 [2024-11-26 12:55:23.894230] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:06.253 [2024-11-26 12:55:23.894450] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:06.253 [2024-11-26 12:55:23.894605] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:06.253 [2024-11-26 12:55:23.894618] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:06.253 [2024-11-26 12:55:23.894728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.253 12:55:23 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.253 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.513 12:55:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.513 "name": "raid_bdev1", 00:12:06.513 "uuid": "a674d663-5c2a-4317-9835-7f879a87b100", 00:12:06.513 "strip_size_kb": 0, 00:12:06.513 "state": "online", 00:12:06.513 "raid_level": "raid1", 00:12:06.513 "superblock": false, 00:12:06.513 "num_base_bdevs": 4, 00:12:06.513 "num_base_bdevs_discovered": 4, 
00:12:06.513 "num_base_bdevs_operational": 4, 00:12:06.513 "base_bdevs_list": [ 00:12:06.513 { 00:12:06.513 "name": "BaseBdev1", 00:12:06.513 "uuid": "5326a28e-b7b1-5e94-90e1-b813815951b6", 00:12:06.513 "is_configured": true, 00:12:06.513 "data_offset": 0, 00:12:06.513 "data_size": 65536 00:12:06.513 }, 00:12:06.513 { 00:12:06.513 "name": "BaseBdev2", 00:12:06.513 "uuid": "efb0bba7-a19c-5f43-91bc-08593f8a7b40", 00:12:06.513 "is_configured": true, 00:12:06.513 "data_offset": 0, 00:12:06.513 "data_size": 65536 00:12:06.513 }, 00:12:06.513 { 00:12:06.513 "name": "BaseBdev3", 00:12:06.513 "uuid": "7b606c7b-5660-5073-a6c9-7bd88bb5106e", 00:12:06.513 "is_configured": true, 00:12:06.513 "data_offset": 0, 00:12:06.513 "data_size": 65536 00:12:06.513 }, 00:12:06.513 { 00:12:06.513 "name": "BaseBdev4", 00:12:06.513 "uuid": "0939a354-72cd-5a71-b367-fa1c8ea25358", 00:12:06.513 "is_configured": true, 00:12:06.513 "data_offset": 0, 00:12:06.513 "data_size": 65536 00:12:06.513 } 00:12:06.513 ] 00:12:06.513 }' 00:12:06.513 12:55:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.513 12:55:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.772 12:55:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:06.772 12:55:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:06.772 12:55:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.772 12:55:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.772 [2024-11-26 12:55:24.331742] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:06.772 12:55:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.772 12:55:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:06.772 12:55:24 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:06.772 12:55:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.772 12:55:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.772 12:55:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.772 12:55:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.773 12:55:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:06.773 12:55:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:06.773 12:55:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:06.773 12:55:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:06.773 12:55:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:06.773 12:55:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:06.773 12:55:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:06.773 12:55:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:06.773 12:55:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:06.773 12:55:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:06.773 12:55:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:06.773 12:55:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:06.773 12:55:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:06.773 12:55:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:07.034 
[2024-11-26 12:55:24.571066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:07.034 /dev/nbd0 00:12:07.034 12:55:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:07.034 12:55:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:07.034 12:55:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:07.034 12:55:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:07.034 12:55:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:07.034 12:55:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:07.034 12:55:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:07.034 12:55:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:07.034 12:55:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:07.034 12:55:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:07.034 12:55:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:07.034 1+0 records in 00:12:07.034 1+0 records out 00:12:07.034 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398345 s, 10.3 MB/s 00:12:07.034 12:55:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.034 12:55:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:07.034 12:55:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.034 12:55:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:07.034 12:55:24 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:07.034 12:55:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:07.034 12:55:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:07.034 12:55:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:07.034 12:55:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:07.034 12:55:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:13.627 65536+0 records in 00:12:13.627 65536+0 records out 00:12:13.627 33554432 bytes (34 MB, 32 MiB) copied, 5.45221 s, 6.2 MB/s 00:12:13.627 12:55:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:13.627 12:55:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:13.627 12:55:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:13.628 [2024-11-26 12:55:30.283593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:13.628 12:55:30 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.628 [2024-11-26 12:55:30.315603] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.628 12:55:30 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.628 "name": "raid_bdev1", 00:12:13.628 "uuid": "a674d663-5c2a-4317-9835-7f879a87b100", 00:12:13.628 "strip_size_kb": 0, 00:12:13.628 "state": "online", 00:12:13.628 "raid_level": "raid1", 00:12:13.628 "superblock": false, 00:12:13.628 "num_base_bdevs": 4, 00:12:13.628 "num_base_bdevs_discovered": 3, 00:12:13.628 "num_base_bdevs_operational": 3, 00:12:13.628 "base_bdevs_list": [ 00:12:13.628 { 00:12:13.628 "name": null, 00:12:13.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.628 "is_configured": false, 00:12:13.628 "data_offset": 0, 00:12:13.628 "data_size": 65536 00:12:13.628 }, 00:12:13.628 { 00:12:13.628 "name": "BaseBdev2", 00:12:13.628 "uuid": "efb0bba7-a19c-5f43-91bc-08593f8a7b40", 00:12:13.628 "is_configured": true, 00:12:13.628 "data_offset": 0, 00:12:13.628 "data_size": 65536 00:12:13.628 }, 00:12:13.628 { 00:12:13.628 "name": "BaseBdev3", 00:12:13.628 "uuid": "7b606c7b-5660-5073-a6c9-7bd88bb5106e", 00:12:13.628 "is_configured": true, 00:12:13.628 "data_offset": 0, 00:12:13.628 "data_size": 65536 00:12:13.628 }, 00:12:13.628 { 00:12:13.628 "name": "BaseBdev4", 00:12:13.628 "uuid": "0939a354-72cd-5a71-b367-fa1c8ea25358", 00:12:13.628 "is_configured": true, 00:12:13.628 "data_offset": 0, 00:12:13.628 "data_size": 65536 00:12:13.628 } 00:12:13.628 ] 
00:12:13.628 }' 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.628 [2024-11-26 12:55:30.746930] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:13.628 [2024-11-26 12:55:30.750301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:12:13.628 [2024-11-26 12:55:30.752165] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.628 12:55:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:14.197 12:55:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:14.197 12:55:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:14.197 12:55:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:14.197 12:55:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:14.197 12:55:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:14.197 12:55:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.197 12:55:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.197 12:55:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:14.197 12:55:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.197 12:55:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.197 12:55:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:14.197 "name": "raid_bdev1", 00:12:14.197 "uuid": "a674d663-5c2a-4317-9835-7f879a87b100", 00:12:14.197 "strip_size_kb": 0, 00:12:14.197 "state": "online", 00:12:14.197 "raid_level": "raid1", 00:12:14.198 "superblock": false, 00:12:14.198 "num_base_bdevs": 4, 00:12:14.198 "num_base_bdevs_discovered": 4, 00:12:14.198 "num_base_bdevs_operational": 4, 00:12:14.198 "process": { 00:12:14.198 "type": "rebuild", 00:12:14.198 "target": "spare", 00:12:14.198 "progress": { 00:12:14.198 "blocks": 20480, 00:12:14.198 "percent": 31 00:12:14.198 } 00:12:14.198 }, 00:12:14.198 "base_bdevs_list": [ 00:12:14.198 { 00:12:14.198 "name": "spare", 00:12:14.198 "uuid": "5826f130-dcc1-5171-a1cd-6b3b34229e06", 00:12:14.198 "is_configured": true, 00:12:14.198 "data_offset": 0, 00:12:14.198 "data_size": 65536 00:12:14.198 }, 00:12:14.198 { 00:12:14.198 "name": "BaseBdev2", 00:12:14.198 "uuid": "efb0bba7-a19c-5f43-91bc-08593f8a7b40", 00:12:14.198 "is_configured": true, 00:12:14.198 "data_offset": 0, 00:12:14.198 "data_size": 65536 00:12:14.198 }, 00:12:14.198 { 00:12:14.198 "name": "BaseBdev3", 00:12:14.198 "uuid": "7b606c7b-5660-5073-a6c9-7bd88bb5106e", 00:12:14.198 "is_configured": true, 00:12:14.198 "data_offset": 0, 00:12:14.198 "data_size": 65536 00:12:14.198 }, 00:12:14.198 { 00:12:14.198 "name": "BaseBdev4", 00:12:14.198 "uuid": "0939a354-72cd-5a71-b367-fa1c8ea25358", 00:12:14.198 "is_configured": true, 00:12:14.198 "data_offset": 0, 00:12:14.198 "data_size": 65536 00:12:14.198 } 00:12:14.198 ] 00:12:14.198 }' 00:12:14.198 12:55:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:14.198 12:55:31 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:14.198 12:55:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:14.457 12:55:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:14.457 12:55:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:14.457 12:55:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.457 12:55:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.457 [2024-11-26 12:55:31.919446] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:14.457 [2024-11-26 12:55:31.956663] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:14.457 [2024-11-26 12:55:31.956792] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.457 [2024-11-26 12:55:31.956834] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:14.457 [2024-11-26 12:55:31.956855] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:14.457 12:55:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.457 12:55:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:14.457 12:55:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.457 12:55:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.457 12:55:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.458 12:55:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.458 12:55:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:12:14.458 12:55:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.458 12:55:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.458 12:55:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.458 12:55:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.458 12:55:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.458 12:55:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.458 12:55:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.458 12:55:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.458 12:55:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.458 12:55:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.458 "name": "raid_bdev1", 00:12:14.458 "uuid": "a674d663-5c2a-4317-9835-7f879a87b100", 00:12:14.458 "strip_size_kb": 0, 00:12:14.458 "state": "online", 00:12:14.458 "raid_level": "raid1", 00:12:14.458 "superblock": false, 00:12:14.458 "num_base_bdevs": 4, 00:12:14.458 "num_base_bdevs_discovered": 3, 00:12:14.458 "num_base_bdevs_operational": 3, 00:12:14.458 "base_bdevs_list": [ 00:12:14.458 { 00:12:14.458 "name": null, 00:12:14.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.458 "is_configured": false, 00:12:14.458 "data_offset": 0, 00:12:14.458 "data_size": 65536 00:12:14.458 }, 00:12:14.458 { 00:12:14.458 "name": "BaseBdev2", 00:12:14.458 "uuid": "efb0bba7-a19c-5f43-91bc-08593f8a7b40", 00:12:14.458 "is_configured": true, 00:12:14.458 "data_offset": 0, 00:12:14.458 "data_size": 65536 00:12:14.458 }, 00:12:14.458 { 00:12:14.458 "name": "BaseBdev3", 00:12:14.458 "uuid": "7b606c7b-5660-5073-a6c9-7bd88bb5106e", 00:12:14.458 
"is_configured": true, 00:12:14.458 "data_offset": 0, 00:12:14.458 "data_size": 65536 00:12:14.458 }, 00:12:14.458 { 00:12:14.458 "name": "BaseBdev4", 00:12:14.458 "uuid": "0939a354-72cd-5a71-b367-fa1c8ea25358", 00:12:14.458 "is_configured": true, 00:12:14.458 "data_offset": 0, 00:12:14.458 "data_size": 65536 00:12:14.458 } 00:12:14.458 ] 00:12:14.458 }' 00:12:14.458 12:55:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.458 12:55:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.028 12:55:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:15.028 12:55:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:15.028 12:55:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:15.028 12:55:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:15.028 12:55:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:15.028 12:55:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.028 12:55:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.028 12:55:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.028 12:55:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.028 12:55:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.028 12:55:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:15.028 "name": "raid_bdev1", 00:12:15.028 "uuid": "a674d663-5c2a-4317-9835-7f879a87b100", 00:12:15.028 "strip_size_kb": 0, 00:12:15.028 "state": "online", 00:12:15.028 "raid_level": "raid1", 00:12:15.028 "superblock": false, 00:12:15.028 "num_base_bdevs": 4, 00:12:15.028 
"num_base_bdevs_discovered": 3, 00:12:15.028 "num_base_bdevs_operational": 3, 00:12:15.028 "base_bdevs_list": [ 00:12:15.028 { 00:12:15.028 "name": null, 00:12:15.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.028 "is_configured": false, 00:12:15.028 "data_offset": 0, 00:12:15.028 "data_size": 65536 00:12:15.028 }, 00:12:15.028 { 00:12:15.028 "name": "BaseBdev2", 00:12:15.028 "uuid": "efb0bba7-a19c-5f43-91bc-08593f8a7b40", 00:12:15.028 "is_configured": true, 00:12:15.028 "data_offset": 0, 00:12:15.028 "data_size": 65536 00:12:15.028 }, 00:12:15.028 { 00:12:15.028 "name": "BaseBdev3", 00:12:15.028 "uuid": "7b606c7b-5660-5073-a6c9-7bd88bb5106e", 00:12:15.028 "is_configured": true, 00:12:15.028 "data_offset": 0, 00:12:15.028 "data_size": 65536 00:12:15.028 }, 00:12:15.028 { 00:12:15.028 "name": "BaseBdev4", 00:12:15.028 "uuid": "0939a354-72cd-5a71-b367-fa1c8ea25358", 00:12:15.028 "is_configured": true, 00:12:15.028 "data_offset": 0, 00:12:15.028 "data_size": 65536 00:12:15.028 } 00:12:15.028 ] 00:12:15.028 }' 00:12:15.028 12:55:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:15.028 12:55:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:15.028 12:55:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:15.028 12:55:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:15.028 12:55:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:15.028 12:55:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.028 12:55:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.028 [2024-11-26 12:55:32.559804] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:15.028 [2024-11-26 12:55:32.563051] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:15.028 [2024-11-26 12:55:32.565008] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:15.028 12:55:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.028 12:55:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:15.968 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:15.968 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:15.968 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:15.968 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:15.968 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:15.968 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.968 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.968 12:55:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.968 12:55:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.968 12:55:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.968 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:15.968 "name": "raid_bdev1", 00:12:15.968 "uuid": "a674d663-5c2a-4317-9835-7f879a87b100", 00:12:15.968 "strip_size_kb": 0, 00:12:15.968 "state": "online", 00:12:15.968 "raid_level": "raid1", 00:12:15.968 "superblock": false, 00:12:15.968 "num_base_bdevs": 4, 00:12:15.968 "num_base_bdevs_discovered": 4, 00:12:15.968 "num_base_bdevs_operational": 4, 00:12:15.968 "process": { 00:12:15.968 "type": "rebuild", 00:12:15.968 "target": 
"spare", 00:12:15.968 "progress": { 00:12:15.968 "blocks": 20480, 00:12:15.968 "percent": 31 00:12:15.968 } 00:12:15.968 }, 00:12:15.968 "base_bdevs_list": [ 00:12:15.968 { 00:12:15.968 "name": "spare", 00:12:15.968 "uuid": "5826f130-dcc1-5171-a1cd-6b3b34229e06", 00:12:15.968 "is_configured": true, 00:12:15.968 "data_offset": 0, 00:12:15.968 "data_size": 65536 00:12:15.968 }, 00:12:15.968 { 00:12:15.968 "name": "BaseBdev2", 00:12:15.968 "uuid": "efb0bba7-a19c-5f43-91bc-08593f8a7b40", 00:12:15.968 "is_configured": true, 00:12:15.968 "data_offset": 0, 00:12:15.968 "data_size": 65536 00:12:15.968 }, 00:12:15.968 { 00:12:15.968 "name": "BaseBdev3", 00:12:15.968 "uuid": "7b606c7b-5660-5073-a6c9-7bd88bb5106e", 00:12:15.968 "is_configured": true, 00:12:15.968 "data_offset": 0, 00:12:15.968 "data_size": 65536 00:12:15.968 }, 00:12:15.968 { 00:12:15.968 "name": "BaseBdev4", 00:12:15.968 "uuid": "0939a354-72cd-5a71-b367-fa1c8ea25358", 00:12:15.968 "is_configured": true, 00:12:15.968 "data_offset": 0, 00:12:15.968 "data_size": 65536 00:12:15.968 } 00:12:15.968 ] 00:12:15.968 }' 00:12:15.968 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:16.228 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:16.228 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:16.228 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:16.228 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:16.228 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:16.228 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:16.228 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:16.228 12:55:33 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:16.228 12:55:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.228 12:55:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.228 [2024-11-26 12:55:33.723714] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:16.228 [2024-11-26 12:55:33.768936] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09ca0 00:12:16.228 12:55:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.228 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:16.228 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:16.228 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:16.228 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:16.228 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:16.228 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:16.228 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:16.228 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.228 12:55:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.228 12:55:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.228 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.228 12:55:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.228 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:16.228 "name": "raid_bdev1", 00:12:16.228 "uuid": "a674d663-5c2a-4317-9835-7f879a87b100", 00:12:16.228 "strip_size_kb": 0, 00:12:16.228 "state": "online", 00:12:16.228 "raid_level": "raid1", 00:12:16.228 "superblock": false, 00:12:16.228 "num_base_bdevs": 4, 00:12:16.228 "num_base_bdevs_discovered": 3, 00:12:16.228 "num_base_bdevs_operational": 3, 00:12:16.228 "process": { 00:12:16.228 "type": "rebuild", 00:12:16.228 "target": "spare", 00:12:16.228 "progress": { 00:12:16.228 "blocks": 24576, 00:12:16.228 "percent": 37 00:12:16.228 } 00:12:16.228 }, 00:12:16.228 "base_bdevs_list": [ 00:12:16.228 { 00:12:16.228 "name": "spare", 00:12:16.228 "uuid": "5826f130-dcc1-5171-a1cd-6b3b34229e06", 00:12:16.228 "is_configured": true, 00:12:16.228 "data_offset": 0, 00:12:16.228 "data_size": 65536 00:12:16.228 }, 00:12:16.228 { 00:12:16.228 "name": null, 00:12:16.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.228 "is_configured": false, 00:12:16.228 "data_offset": 0, 00:12:16.228 "data_size": 65536 00:12:16.228 }, 00:12:16.228 { 00:12:16.228 "name": "BaseBdev3", 00:12:16.228 "uuid": "7b606c7b-5660-5073-a6c9-7bd88bb5106e", 00:12:16.228 "is_configured": true, 00:12:16.228 "data_offset": 0, 00:12:16.228 "data_size": 65536 00:12:16.228 }, 00:12:16.228 { 00:12:16.228 "name": "BaseBdev4", 00:12:16.228 "uuid": "0939a354-72cd-5a71-b367-fa1c8ea25358", 00:12:16.228 "is_configured": true, 00:12:16.228 "data_offset": 0, 00:12:16.228 "data_size": 65536 00:12:16.228 } 00:12:16.228 ] 00:12:16.228 }' 00:12:16.228 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:16.228 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:16.228 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:16.488 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:16.488 12:55:33 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=357 00:12:16.488 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:16.488 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:16.488 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:16.488 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:16.488 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:16.488 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:16.488 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.488 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.488 12:55:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.488 12:55:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.488 12:55:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.488 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:16.488 "name": "raid_bdev1", 00:12:16.488 "uuid": "a674d663-5c2a-4317-9835-7f879a87b100", 00:12:16.488 "strip_size_kb": 0, 00:12:16.488 "state": "online", 00:12:16.488 "raid_level": "raid1", 00:12:16.488 "superblock": false, 00:12:16.488 "num_base_bdevs": 4, 00:12:16.488 "num_base_bdevs_discovered": 3, 00:12:16.488 "num_base_bdevs_operational": 3, 00:12:16.488 "process": { 00:12:16.488 "type": "rebuild", 00:12:16.488 "target": "spare", 00:12:16.488 "progress": { 00:12:16.488 "blocks": 26624, 00:12:16.488 "percent": 40 00:12:16.488 } 00:12:16.488 }, 00:12:16.488 "base_bdevs_list": [ 00:12:16.488 { 00:12:16.488 "name": 
"spare", 00:12:16.488 "uuid": "5826f130-dcc1-5171-a1cd-6b3b34229e06", 00:12:16.488 "is_configured": true, 00:12:16.488 "data_offset": 0, 00:12:16.488 "data_size": 65536 00:12:16.488 }, 00:12:16.488 { 00:12:16.488 "name": null, 00:12:16.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.488 "is_configured": false, 00:12:16.488 "data_offset": 0, 00:12:16.488 "data_size": 65536 00:12:16.488 }, 00:12:16.488 { 00:12:16.488 "name": "BaseBdev3", 00:12:16.488 "uuid": "7b606c7b-5660-5073-a6c9-7bd88bb5106e", 00:12:16.488 "is_configured": true, 00:12:16.488 "data_offset": 0, 00:12:16.488 "data_size": 65536 00:12:16.488 }, 00:12:16.488 { 00:12:16.488 "name": "BaseBdev4", 00:12:16.488 "uuid": "0939a354-72cd-5a71-b367-fa1c8ea25358", 00:12:16.488 "is_configured": true, 00:12:16.488 "data_offset": 0, 00:12:16.488 "data_size": 65536 00:12:16.488 } 00:12:16.488 ] 00:12:16.488 }' 00:12:16.488 12:55:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:16.488 12:55:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:16.488 12:55:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:16.488 12:55:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:16.488 12:55:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:17.427 12:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:17.427 12:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:17.427 12:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:17.427 12:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:17.427 12:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:17.427 12:55:35 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:17.427 12:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.427 12:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.427 12:55:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.427 12:55:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.687 12:55:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.687 12:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:17.687 "name": "raid_bdev1", 00:12:17.687 "uuid": "a674d663-5c2a-4317-9835-7f879a87b100", 00:12:17.687 "strip_size_kb": 0, 00:12:17.687 "state": "online", 00:12:17.687 "raid_level": "raid1", 00:12:17.687 "superblock": false, 00:12:17.687 "num_base_bdevs": 4, 00:12:17.687 "num_base_bdevs_discovered": 3, 00:12:17.687 "num_base_bdevs_operational": 3, 00:12:17.687 "process": { 00:12:17.687 "type": "rebuild", 00:12:17.687 "target": "spare", 00:12:17.687 "progress": { 00:12:17.687 "blocks": 51200, 00:12:17.687 "percent": 78 00:12:17.687 } 00:12:17.687 }, 00:12:17.687 "base_bdevs_list": [ 00:12:17.687 { 00:12:17.687 "name": "spare", 00:12:17.687 "uuid": "5826f130-dcc1-5171-a1cd-6b3b34229e06", 00:12:17.687 "is_configured": true, 00:12:17.687 "data_offset": 0, 00:12:17.687 "data_size": 65536 00:12:17.687 }, 00:12:17.687 { 00:12:17.687 "name": null, 00:12:17.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.687 "is_configured": false, 00:12:17.687 "data_offset": 0, 00:12:17.687 "data_size": 65536 00:12:17.687 }, 00:12:17.687 { 00:12:17.687 "name": "BaseBdev3", 00:12:17.687 "uuid": "7b606c7b-5660-5073-a6c9-7bd88bb5106e", 00:12:17.687 "is_configured": true, 00:12:17.687 "data_offset": 0, 00:12:17.687 "data_size": 65536 00:12:17.687 }, 00:12:17.687 { 00:12:17.687 
"name": "BaseBdev4", 00:12:17.687 "uuid": "0939a354-72cd-5a71-b367-fa1c8ea25358", 00:12:17.687 "is_configured": true, 00:12:17.687 "data_offset": 0, 00:12:17.687 "data_size": 65536 00:12:17.687 } 00:12:17.687 ] 00:12:17.687 }' 00:12:17.687 12:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:17.687 12:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:17.687 12:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:17.687 12:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:17.687 12:55:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:18.256 [2024-11-26 12:55:35.775685] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:18.256 [2024-11-26 12:55:35.775815] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:18.256 [2024-11-26 12:55:35.775863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:18.825 "name": "raid_bdev1", 00:12:18.825 "uuid": "a674d663-5c2a-4317-9835-7f879a87b100", 00:12:18.825 "strip_size_kb": 0, 00:12:18.825 "state": "online", 00:12:18.825 "raid_level": "raid1", 00:12:18.825 "superblock": false, 00:12:18.825 "num_base_bdevs": 4, 00:12:18.825 "num_base_bdevs_discovered": 3, 00:12:18.825 "num_base_bdevs_operational": 3, 00:12:18.825 "base_bdevs_list": [ 00:12:18.825 { 00:12:18.825 "name": "spare", 00:12:18.825 "uuid": "5826f130-dcc1-5171-a1cd-6b3b34229e06", 00:12:18.825 "is_configured": true, 00:12:18.825 "data_offset": 0, 00:12:18.825 "data_size": 65536 00:12:18.825 }, 00:12:18.825 { 00:12:18.825 "name": null, 00:12:18.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.825 "is_configured": false, 00:12:18.825 "data_offset": 0, 00:12:18.825 "data_size": 65536 00:12:18.825 }, 00:12:18.825 { 00:12:18.825 "name": "BaseBdev3", 00:12:18.825 "uuid": "7b606c7b-5660-5073-a6c9-7bd88bb5106e", 00:12:18.825 "is_configured": true, 00:12:18.825 "data_offset": 0, 00:12:18.825 "data_size": 65536 00:12:18.825 }, 00:12:18.825 { 00:12:18.825 "name": "BaseBdev4", 00:12:18.825 "uuid": "0939a354-72cd-5a71-b367-fa1c8ea25358", 00:12:18.825 "is_configured": true, 00:12:18.825 "data_offset": 0, 00:12:18.825 "data_size": 65536 00:12:18.825 } 00:12:18.825 ] 00:12:18.825 }' 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:18.825 12:55:36 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:18.825 "name": "raid_bdev1", 00:12:18.825 "uuid": "a674d663-5c2a-4317-9835-7f879a87b100", 00:12:18.825 "strip_size_kb": 0, 00:12:18.825 "state": "online", 00:12:18.825 "raid_level": "raid1", 00:12:18.825 "superblock": false, 00:12:18.825 "num_base_bdevs": 4, 00:12:18.825 "num_base_bdevs_discovered": 3, 00:12:18.825 "num_base_bdevs_operational": 3, 00:12:18.825 "base_bdevs_list": [ 00:12:18.825 { 00:12:18.825 "name": "spare", 00:12:18.825 "uuid": "5826f130-dcc1-5171-a1cd-6b3b34229e06", 00:12:18.825 "is_configured": true, 
00:12:18.825 "data_offset": 0, 00:12:18.825 "data_size": 65536 00:12:18.825 }, 00:12:18.825 { 00:12:18.825 "name": null, 00:12:18.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.825 "is_configured": false, 00:12:18.825 "data_offset": 0, 00:12:18.825 "data_size": 65536 00:12:18.825 }, 00:12:18.825 { 00:12:18.825 "name": "BaseBdev3", 00:12:18.825 "uuid": "7b606c7b-5660-5073-a6c9-7bd88bb5106e", 00:12:18.825 "is_configured": true, 00:12:18.825 "data_offset": 0, 00:12:18.825 "data_size": 65536 00:12:18.825 }, 00:12:18.825 { 00:12:18.825 "name": "BaseBdev4", 00:12:18.825 "uuid": "0939a354-72cd-5a71-b367-fa1c8ea25358", 00:12:18.825 "is_configured": true, 00:12:18.825 "data_offset": 0, 00:12:18.825 "data_size": 65536 00:12:18.825 } 00:12:18.825 ] 00:12:18.825 }' 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.825 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.826 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:18.826 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.826 
12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.826 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.826 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.826 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.826 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.826 12:55:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.826 12:55:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.085 12:55:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.085 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.085 "name": "raid_bdev1", 00:12:19.085 "uuid": "a674d663-5c2a-4317-9835-7f879a87b100", 00:12:19.085 "strip_size_kb": 0, 00:12:19.085 "state": "online", 00:12:19.085 "raid_level": "raid1", 00:12:19.085 "superblock": false, 00:12:19.085 "num_base_bdevs": 4, 00:12:19.085 "num_base_bdevs_discovered": 3, 00:12:19.085 "num_base_bdevs_operational": 3, 00:12:19.085 "base_bdevs_list": [ 00:12:19.085 { 00:12:19.085 "name": "spare", 00:12:19.085 "uuid": "5826f130-dcc1-5171-a1cd-6b3b34229e06", 00:12:19.085 "is_configured": true, 00:12:19.085 "data_offset": 0, 00:12:19.085 "data_size": 65536 00:12:19.085 }, 00:12:19.085 { 00:12:19.085 "name": null, 00:12:19.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.085 "is_configured": false, 00:12:19.085 "data_offset": 0, 00:12:19.085 "data_size": 65536 00:12:19.085 }, 00:12:19.085 { 00:12:19.085 "name": "BaseBdev3", 00:12:19.085 "uuid": "7b606c7b-5660-5073-a6c9-7bd88bb5106e", 00:12:19.085 "is_configured": true, 00:12:19.085 "data_offset": 0, 00:12:19.085 "data_size": 65536 00:12:19.085 }, 00:12:19.085 { 
00:12:19.085 "name": "BaseBdev4", 00:12:19.085 "uuid": "0939a354-72cd-5a71-b367-fa1c8ea25358", 00:12:19.085 "is_configured": true, 00:12:19.085 "data_offset": 0, 00:12:19.085 "data_size": 65536 00:12:19.085 } 00:12:19.085 ] 00:12:19.085 }' 00:12:19.085 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.085 12:55:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.344 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:19.344 12:55:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.344 12:55:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.344 [2024-11-26 12:55:36.933569] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:19.344 [2024-11-26 12:55:36.933641] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:19.344 [2024-11-26 12:55:36.933757] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:19.344 [2024-11-26 12:55:36.933850] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:19.344 [2024-11-26 12:55:36.933912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:19.344 12:55:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.344 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.344 12:55:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.344 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:19.344 12:55:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.344 12:55:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:19.344 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:19.344 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:19.344 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:19.344 12:55:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:19.344 12:55:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:19.344 12:55:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:19.344 12:55:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:19.344 12:55:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:19.344 12:55:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:19.344 12:55:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:19.344 12:55:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:19.344 12:55:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:19.344 12:55:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:19.603 /dev/nbd0 00:12:19.603 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:19.603 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:19.603 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:19.603 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:19.603 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:19.603 12:55:37 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:19.604 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:19.604 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:19.604 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:19.604 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:19.604 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.604 1+0 records in 00:12:19.604 1+0 records out 00:12:19.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324357 s, 12.6 MB/s 00:12:19.604 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.604 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:19.604 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.604 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:19.604 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:19.604 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:19.604 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:19.604 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:19.864 /dev/nbd1 00:12:19.864 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:19.864 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:19.864 
12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:19.864 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:19.864 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:19.864 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:19.864 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:19.864 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:19.864 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:19.864 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:19.864 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.864 1+0 records in 00:12:19.864 1+0 records out 00:12:19.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431945 s, 9.5 MB/s 00:12:19.864 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.864 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:19.864 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.864 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:19.864 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:19.864 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:19.864 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:19.864 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 
/dev/nbd1 00:12:20.123 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:20.123 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:20.123 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:20.123 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:20.123 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:20.123 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:20.123 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:20.123 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:20.123 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:20.123 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:20.123 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:20.123 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:20.123 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:20.123 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:20.123 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:20.123 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:20.123 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:20.383 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:20.383 
12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:20.383 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:20.383 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:20.383 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:20.383 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:20.383 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:20.383 12:55:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:20.383 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:20.383 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 88380 00:12:20.383 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 88380 ']' 00:12:20.383 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 88380 00:12:20.383 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:12:20.383 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:20.383 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88380 00:12:20.383 killing process with pid 88380 00:12:20.383 Received shutdown signal, test time was about 60.000000 seconds 00:12:20.383 00:12:20.383 Latency(us) 00:12:20.383 [2024-11-26T12:55:38.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:20.383 [2024-11-26T12:55:38.067Z] =================================================================================================================== 00:12:20.383 [2024-11-26T12:55:38.067Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:20.383 12:55:37 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:20.383 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:20.383 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88380' 00:12:20.383 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 88380 00:12:20.383 [2024-11-26 12:55:37.954072] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:20.383 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 88380 00:12:20.383 [2024-11-26 12:55:38.003573] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:20.643 12:55:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:20.643 00:12:20.643 real 0m15.405s 00:12:20.643 user 0m17.113s 00:12:20.643 sys 0m3.021s 00:12:20.643 ************************************ 00:12:20.643 END TEST raid_rebuild_test 00:12:20.643 ************************************ 00:12:20.643 12:55:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:20.643 12:55:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.643 12:55:38 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:12:20.643 12:55:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:20.643 12:55:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:20.643 12:55:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:20.643 ************************************ 00:12:20.643 START TEST raid_rebuild_test_sb 00:12:20.643 ************************************ 00:12:20.643 12:55:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:12:20.643 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 
00:12:20.643 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:20.643 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:20.643 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:20.643 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:20.643 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:20.643 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:20.643 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:20.909 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:20.909 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:20.909 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:20.909 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:20.909 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:20.909 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:20.909 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:20.909 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:20.909 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:20.909 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:20.909 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:20.909 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:20.909 12:55:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:20.909 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:20.909 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:20.909 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:20.909 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:20.909 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:20.909 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:20.909 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:20.909 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:20.909 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:20.909 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=88804 00:12:20.909 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:20.909 12:55:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 88804 00:12:20.909 12:55:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 88804 ']' 00:12:20.909 12:55:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:20.909 12:55:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:20.909 12:55:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.909 12:55:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:20.909 12:55:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.909 [2024-11-26 12:55:38.407700] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:20.909 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:20.909 Zero copy mechanism will not be used. 00:12:20.909 [2024-11-26 12:55:38.407915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88804 ] 00:12:20.909 [2024-11-26 12:55:38.562923] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.172 [2024-11-26 12:55:38.607930] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.172 [2024-11-26 12:55:38.650757] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.172 [2024-11-26 12:55:38.650864] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.742 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:21.742 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:21.742 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:21.742 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:21.742 12:55:39 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.742 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.742 BaseBdev1_malloc 00:12:21.742 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.742 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:21.742 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.742 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.742 [2024-11-26 12:55:39.252932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:21.742 [2024-11-26 12:55:39.253018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.742 [2024-11-26 12:55:39.253044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:21.742 [2024-11-26 12:55:39.253065] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.742 [2024-11-26 12:55:39.255159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.742 [2024-11-26 12:55:39.255205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:21.742 BaseBdev1 00:12:21.742 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.742 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:21.742 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:21.742 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.742 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.742 
BaseBdev2_malloc 00:12:21.742 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.742 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:21.742 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.742 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.742 [2024-11-26 12:55:39.294944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:21.742 [2024-11-26 12:55:39.295039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.742 [2024-11-26 12:55:39.295080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:21.742 [2024-11-26 12:55:39.295101] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.742 [2024-11-26 12:55:39.299582] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.742 [2024-11-26 12:55:39.299645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:21.742 BaseBdev2 00:12:21.742 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.742 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:21.742 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:21.742 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.742 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.742 BaseBdev3_malloc 00:12:21.742 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.742 12:55:39 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:21.742 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.742 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.742 [2024-11-26 12:55:39.325249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:21.742 [2024-11-26 12:55:39.325347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.742 [2024-11-26 12:55:39.325374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:21.742 [2024-11-26 12:55:39.325383] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.743 [2024-11-26 12:55:39.327307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.743 [2024-11-26 12:55:39.327341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:21.743 BaseBdev3 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.743 BaseBdev4_malloc 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.743 [2024-11-26 12:55:39.353567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:21.743 [2024-11-26 12:55:39.353615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.743 [2024-11-26 12:55:39.353639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:21.743 [2024-11-26 12:55:39.353647] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.743 [2024-11-26 12:55:39.355601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.743 [2024-11-26 12:55:39.355683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:21.743 BaseBdev4 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.743 spare_malloc 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.743 spare_delay 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.743 [2024-11-26 12:55:39.393930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:21.743 [2024-11-26 12:55:39.394031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.743 [2024-11-26 12:55:39.394056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:21.743 [2024-11-26 12:55:39.394066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.743 [2024-11-26 12:55:39.396060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.743 [2024-11-26 12:55:39.396097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:21.743 spare 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.743 [2024-11-26 12:55:39.406003] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:21.743 [2024-11-26 12:55:39.407773] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:21.743 [2024-11-26 12:55:39.407839] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:21.743 [2024-11-26 12:55:39.407881] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:12:21.743 [2024-11-26 12:55:39.408047] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:21.743 [2024-11-26 12:55:39.408058] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:21.743 [2024-11-26 12:55:39.408306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:21.743 [2024-11-26 12:55:39.408447] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:21.743 [2024-11-26 12:55:39.408459] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:21.743 [2024-11-26 12:55:39.408567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.743 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:12:22.003 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.003 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.003 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.003 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.003 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.003 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.003 "name": "raid_bdev1", 00:12:22.003 "uuid": "79ec3a51-07de-4936-bd5e-61c4a81ad745", 00:12:22.003 "strip_size_kb": 0, 00:12:22.003 "state": "online", 00:12:22.003 "raid_level": "raid1", 00:12:22.003 "superblock": true, 00:12:22.003 "num_base_bdevs": 4, 00:12:22.003 "num_base_bdevs_discovered": 4, 00:12:22.003 "num_base_bdevs_operational": 4, 00:12:22.003 "base_bdevs_list": [ 00:12:22.003 { 00:12:22.003 "name": "BaseBdev1", 00:12:22.003 "uuid": "cfc28df9-1f5c-5736-b60a-6a35b737541d", 00:12:22.003 "is_configured": true, 00:12:22.003 "data_offset": 2048, 00:12:22.003 "data_size": 63488 00:12:22.003 }, 00:12:22.003 { 00:12:22.003 "name": "BaseBdev2", 00:12:22.003 "uuid": "5d2c3055-135f-5e6b-bd70-737b405cd020", 00:12:22.003 "is_configured": true, 00:12:22.003 "data_offset": 2048, 00:12:22.003 "data_size": 63488 00:12:22.003 }, 00:12:22.003 { 00:12:22.003 "name": "BaseBdev3", 00:12:22.003 "uuid": "c4d72f27-3f24-56a6-bad3-e22298332a5a", 00:12:22.003 "is_configured": true, 00:12:22.003 "data_offset": 2048, 00:12:22.003 "data_size": 63488 00:12:22.003 }, 00:12:22.003 { 00:12:22.003 "name": "BaseBdev4", 00:12:22.003 "uuid": "7950dd5a-a018-54d8-ab44-0cee67a9d3c2", 00:12:22.003 "is_configured": true, 00:12:22.003 "data_offset": 2048, 00:12:22.003 "data_size": 63488 00:12:22.003 } 00:12:22.003 ] 00:12:22.003 }' 
00:12:22.003 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.003 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.263 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:22.263 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.263 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.263 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:22.263 [2024-11-26 12:55:39.817564] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:22.263 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.263 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:22.263 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:22.263 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.263 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.263 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.263 12:55:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.263 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:22.263 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:22.263 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:22.263 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:22.263 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # 
nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:22.263 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:22.263 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:22.263 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:22.264 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:22.264 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:22.264 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:22.264 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:22.264 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:22.264 12:55:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:22.523 [2024-11-26 12:55:40.084871] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:22.523 /dev/nbd0 00:12:22.523 12:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:22.523 12:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:22.523 12:55:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:22.523 12:55:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:22.523 12:55:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:22.524 12:55:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:22.524 12:55:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:22.524 12:55:40 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@873 -- # break 00:12:22.524 12:55:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:22.524 12:55:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:22.524 12:55:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:22.524 1+0 records in 00:12:22.524 1+0 records out 00:12:22.524 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000564449 s, 7.3 MB/s 00:12:22.524 12:55:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.524 12:55:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:22.524 12:55:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.524 12:55:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:22.524 12:55:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:22.524 12:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:22.524 12:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:22.524 12:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:22.524 12:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:22.524 12:55:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:27.801 63488+0 records in 00:12:27.801 63488+0 records out 00:12:27.801 32505856 bytes (33 MB, 31 MiB) copied, 5.26992 s, 6.2 MB/s 00:12:27.801 12:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:27.801 12:55:45 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:27.801 12:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:27.801 12:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:27.801 12:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:27.801 12:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:27.801 12:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:28.061 [2024-11-26 12:55:45.594562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.061 12:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:28.061 12:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:28.061 12:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:28.061 12:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:28.061 12:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:28.061 12:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:28.061 12:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:28.061 12:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:28.061 12:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:28.061 12:55:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.061 12:55:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.061 [2024-11-26 12:55:45.630561] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:28.061 12:55:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.061 12:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:28.061 12:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.061 12:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.061 12:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.061 12:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.061 12:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:28.061 12:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.061 12:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.061 12:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.061 12:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.061 12:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.061 12:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.061 12:55:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.061 12:55:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.061 12:55:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.061 12:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.061 "name": "raid_bdev1", 00:12:28.061 "uuid": 
"79ec3a51-07de-4936-bd5e-61c4a81ad745", 00:12:28.061 "strip_size_kb": 0, 00:12:28.061 "state": "online", 00:12:28.061 "raid_level": "raid1", 00:12:28.061 "superblock": true, 00:12:28.061 "num_base_bdevs": 4, 00:12:28.061 "num_base_bdevs_discovered": 3, 00:12:28.061 "num_base_bdevs_operational": 3, 00:12:28.061 "base_bdevs_list": [ 00:12:28.061 { 00:12:28.061 "name": null, 00:12:28.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.061 "is_configured": false, 00:12:28.061 "data_offset": 0, 00:12:28.061 "data_size": 63488 00:12:28.061 }, 00:12:28.061 { 00:12:28.061 "name": "BaseBdev2", 00:12:28.061 "uuid": "5d2c3055-135f-5e6b-bd70-737b405cd020", 00:12:28.061 "is_configured": true, 00:12:28.061 "data_offset": 2048, 00:12:28.061 "data_size": 63488 00:12:28.061 }, 00:12:28.061 { 00:12:28.061 "name": "BaseBdev3", 00:12:28.061 "uuid": "c4d72f27-3f24-56a6-bad3-e22298332a5a", 00:12:28.061 "is_configured": true, 00:12:28.061 "data_offset": 2048, 00:12:28.061 "data_size": 63488 00:12:28.061 }, 00:12:28.061 { 00:12:28.061 "name": "BaseBdev4", 00:12:28.061 "uuid": "7950dd5a-a018-54d8-ab44-0cee67a9d3c2", 00:12:28.061 "is_configured": true, 00:12:28.061 "data_offset": 2048, 00:12:28.061 "data_size": 63488 00:12:28.061 } 00:12:28.061 ] 00:12:28.061 }' 00:12:28.061 12:55:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.061 12:55:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.630 12:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:28.630 12:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.630 12:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.630 [2024-11-26 12:55:46.029902] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:28.630 [2024-11-26 12:55:46.033334] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:28.630 [2024-11-26 12:55:46.035349] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:28.630 12:55:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.630 12:55:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:29.567 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:29.567 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.567 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:29.567 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:29.567 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.567 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.567 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.567 12:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.567 12:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.567 12:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.567 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.567 "name": "raid_bdev1", 00:12:29.567 "uuid": "79ec3a51-07de-4936-bd5e-61c4a81ad745", 00:12:29.567 "strip_size_kb": 0, 00:12:29.567 "state": "online", 00:12:29.567 "raid_level": "raid1", 00:12:29.567 "superblock": true, 00:12:29.567 "num_base_bdevs": 4, 00:12:29.567 "num_base_bdevs_discovered": 4, 00:12:29.567 "num_base_bdevs_operational": 4, 00:12:29.567 "process": { 00:12:29.567 "type": 
"rebuild", 00:12:29.567 "target": "spare", 00:12:29.567 "progress": { 00:12:29.567 "blocks": 20480, 00:12:29.567 "percent": 32 00:12:29.567 } 00:12:29.567 }, 00:12:29.567 "base_bdevs_list": [ 00:12:29.567 { 00:12:29.567 "name": "spare", 00:12:29.567 "uuid": "cc5f6013-975b-5013-af77-16d287d19c4b", 00:12:29.567 "is_configured": true, 00:12:29.567 "data_offset": 2048, 00:12:29.567 "data_size": 63488 00:12:29.567 }, 00:12:29.567 { 00:12:29.567 "name": "BaseBdev2", 00:12:29.567 "uuid": "5d2c3055-135f-5e6b-bd70-737b405cd020", 00:12:29.567 "is_configured": true, 00:12:29.567 "data_offset": 2048, 00:12:29.567 "data_size": 63488 00:12:29.567 }, 00:12:29.567 { 00:12:29.567 "name": "BaseBdev3", 00:12:29.567 "uuid": "c4d72f27-3f24-56a6-bad3-e22298332a5a", 00:12:29.567 "is_configured": true, 00:12:29.567 "data_offset": 2048, 00:12:29.567 "data_size": 63488 00:12:29.567 }, 00:12:29.567 { 00:12:29.567 "name": "BaseBdev4", 00:12:29.567 "uuid": "7950dd5a-a018-54d8-ab44-0cee67a9d3c2", 00:12:29.567 "is_configured": true, 00:12:29.567 "data_offset": 2048, 00:12:29.567 "data_size": 63488 00:12:29.567 } 00:12:29.568 ] 00:12:29.568 }' 00:12:29.568 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.568 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:29.568 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.568 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:29.568 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:29.568 12:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.568 12:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.568 [2024-11-26 12:55:47.197881] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:29.568 [2024-11-26 12:55:47.239815] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:29.568 [2024-11-26 12:55:47.239871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.568 [2024-11-26 12:55:47.239889] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:29.568 [2024-11-26 12:55:47.239896] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:29.827 12:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.827 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:29.827 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.827 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.827 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.827 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.827 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:29.827 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.827 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.827 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.827 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.827 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.827 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:12:29.827 12:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.827 12:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.827 12:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.827 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.827 "name": "raid_bdev1", 00:12:29.827 "uuid": "79ec3a51-07de-4936-bd5e-61c4a81ad745", 00:12:29.827 "strip_size_kb": 0, 00:12:29.827 "state": "online", 00:12:29.827 "raid_level": "raid1", 00:12:29.827 "superblock": true, 00:12:29.827 "num_base_bdevs": 4, 00:12:29.827 "num_base_bdevs_discovered": 3, 00:12:29.827 "num_base_bdevs_operational": 3, 00:12:29.827 "base_bdevs_list": [ 00:12:29.827 { 00:12:29.827 "name": null, 00:12:29.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.827 "is_configured": false, 00:12:29.827 "data_offset": 0, 00:12:29.827 "data_size": 63488 00:12:29.827 }, 00:12:29.827 { 00:12:29.827 "name": "BaseBdev2", 00:12:29.827 "uuid": "5d2c3055-135f-5e6b-bd70-737b405cd020", 00:12:29.827 "is_configured": true, 00:12:29.827 "data_offset": 2048, 00:12:29.827 "data_size": 63488 00:12:29.827 }, 00:12:29.827 { 00:12:29.827 "name": "BaseBdev3", 00:12:29.827 "uuid": "c4d72f27-3f24-56a6-bad3-e22298332a5a", 00:12:29.827 "is_configured": true, 00:12:29.827 "data_offset": 2048, 00:12:29.827 "data_size": 63488 00:12:29.827 }, 00:12:29.827 { 00:12:29.827 "name": "BaseBdev4", 00:12:29.827 "uuid": "7950dd5a-a018-54d8-ab44-0cee67a9d3c2", 00:12:29.827 "is_configured": true, 00:12:29.827 "data_offset": 2048, 00:12:29.827 "data_size": 63488 00:12:29.827 } 00:12:29.827 ] 00:12:29.827 }' 00:12:29.827 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.827 12:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.086 12:55:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:30.086 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.086 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:30.086 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:30.086 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.086 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.086 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.086 12:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.086 12:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.086 12:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.345 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.345 "name": "raid_bdev1", 00:12:30.345 "uuid": "79ec3a51-07de-4936-bd5e-61c4a81ad745", 00:12:30.345 "strip_size_kb": 0, 00:12:30.345 "state": "online", 00:12:30.345 "raid_level": "raid1", 00:12:30.345 "superblock": true, 00:12:30.345 "num_base_bdevs": 4, 00:12:30.345 "num_base_bdevs_discovered": 3, 00:12:30.345 "num_base_bdevs_operational": 3, 00:12:30.345 "base_bdevs_list": [ 00:12:30.345 { 00:12:30.345 "name": null, 00:12:30.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.345 "is_configured": false, 00:12:30.345 "data_offset": 0, 00:12:30.345 "data_size": 63488 00:12:30.345 }, 00:12:30.345 { 00:12:30.345 "name": "BaseBdev2", 00:12:30.345 "uuid": "5d2c3055-135f-5e6b-bd70-737b405cd020", 00:12:30.345 "is_configured": true, 00:12:30.345 "data_offset": 2048, 00:12:30.345 "data_size": 
63488 00:12:30.345 }, 00:12:30.345 { 00:12:30.345 "name": "BaseBdev3", 00:12:30.345 "uuid": "c4d72f27-3f24-56a6-bad3-e22298332a5a", 00:12:30.345 "is_configured": true, 00:12:30.345 "data_offset": 2048, 00:12:30.345 "data_size": 63488 00:12:30.345 }, 00:12:30.345 { 00:12:30.345 "name": "BaseBdev4", 00:12:30.345 "uuid": "7950dd5a-a018-54d8-ab44-0cee67a9d3c2", 00:12:30.345 "is_configured": true, 00:12:30.345 "data_offset": 2048, 00:12:30.345 "data_size": 63488 00:12:30.345 } 00:12:30.345 ] 00:12:30.345 }' 00:12:30.345 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.345 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:30.345 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.345 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:30.345 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:30.345 12:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.345 12:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.345 [2024-11-26 12:55:47.858712] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:30.345 [2024-11-26 12:55:47.861962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:30.345 [2024-11-26 12:55:47.863825] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:30.345 12:55:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.345 12:55:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:31.282 12:55:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:12:31.282 12:55:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.282 12:55:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:31.282 12:55:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:31.282 12:55:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.282 12:55:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.282 12:55:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.282 12:55:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.282 12:55:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.282 12:55:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.282 12:55:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.282 "name": "raid_bdev1", 00:12:31.282 "uuid": "79ec3a51-07de-4936-bd5e-61c4a81ad745", 00:12:31.282 "strip_size_kb": 0, 00:12:31.282 "state": "online", 00:12:31.282 "raid_level": "raid1", 00:12:31.282 "superblock": true, 00:12:31.282 "num_base_bdevs": 4, 00:12:31.282 "num_base_bdevs_discovered": 4, 00:12:31.282 "num_base_bdevs_operational": 4, 00:12:31.282 "process": { 00:12:31.282 "type": "rebuild", 00:12:31.282 "target": "spare", 00:12:31.282 "progress": { 00:12:31.282 "blocks": 20480, 00:12:31.282 "percent": 32 00:12:31.282 } 00:12:31.282 }, 00:12:31.282 "base_bdevs_list": [ 00:12:31.282 { 00:12:31.282 "name": "spare", 00:12:31.282 "uuid": "cc5f6013-975b-5013-af77-16d287d19c4b", 00:12:31.282 "is_configured": true, 00:12:31.282 "data_offset": 2048, 00:12:31.282 "data_size": 63488 00:12:31.282 }, 00:12:31.282 { 00:12:31.282 "name": "BaseBdev2", 00:12:31.282 "uuid": 
"5d2c3055-135f-5e6b-bd70-737b405cd020", 00:12:31.282 "is_configured": true, 00:12:31.282 "data_offset": 2048, 00:12:31.282 "data_size": 63488 00:12:31.282 }, 00:12:31.282 { 00:12:31.282 "name": "BaseBdev3", 00:12:31.282 "uuid": "c4d72f27-3f24-56a6-bad3-e22298332a5a", 00:12:31.282 "is_configured": true, 00:12:31.282 "data_offset": 2048, 00:12:31.282 "data_size": 63488 00:12:31.282 }, 00:12:31.282 { 00:12:31.282 "name": "BaseBdev4", 00:12:31.282 "uuid": "7950dd5a-a018-54d8-ab44-0cee67a9d3c2", 00:12:31.282 "is_configured": true, 00:12:31.282 "data_offset": 2048, 00:12:31.282 "data_size": 63488 00:12:31.282 } 00:12:31.282 ] 00:12:31.282 }' 00:12:31.282 12:55:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.282 12:55:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:31.282 12:55:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.542 12:55:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:31.542 12:55:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:31.542 12:55:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:31.542 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:31.542 12:55:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:31.542 12:55:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:31.542 12:55:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:31.542 12:55:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:31.542 12:55:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.542 12:55:48 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.542 [2024-11-26 12:55:49.007342] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:31.542 [2024-11-26 12:55:49.167551] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca3430 00:12:31.542 12:55:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.542 12:55:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:31.542 12:55:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:31.542 12:55:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:31.542 12:55:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.542 12:55:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:31.542 12:55:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:31.542 12:55:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.542 12:55:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.542 12:55:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.542 12:55:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.542 12:55:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.542 12:55:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.800 12:55:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.800 "name": "raid_bdev1", 00:12:31.800 "uuid": "79ec3a51-07de-4936-bd5e-61c4a81ad745", 00:12:31.800 "strip_size_kb": 0, 00:12:31.800 
"state": "online", 00:12:31.800 "raid_level": "raid1", 00:12:31.800 "superblock": true, 00:12:31.800 "num_base_bdevs": 4, 00:12:31.800 "num_base_bdevs_discovered": 3, 00:12:31.800 "num_base_bdevs_operational": 3, 00:12:31.800 "process": { 00:12:31.800 "type": "rebuild", 00:12:31.800 "target": "spare", 00:12:31.800 "progress": { 00:12:31.800 "blocks": 24576, 00:12:31.800 "percent": 38 00:12:31.800 } 00:12:31.800 }, 00:12:31.800 "base_bdevs_list": [ 00:12:31.800 { 00:12:31.800 "name": "spare", 00:12:31.800 "uuid": "cc5f6013-975b-5013-af77-16d287d19c4b", 00:12:31.800 "is_configured": true, 00:12:31.800 "data_offset": 2048, 00:12:31.800 "data_size": 63488 00:12:31.800 }, 00:12:31.800 { 00:12:31.800 "name": null, 00:12:31.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.800 "is_configured": false, 00:12:31.800 "data_offset": 0, 00:12:31.800 "data_size": 63488 00:12:31.800 }, 00:12:31.800 { 00:12:31.800 "name": "BaseBdev3", 00:12:31.800 "uuid": "c4d72f27-3f24-56a6-bad3-e22298332a5a", 00:12:31.800 "is_configured": true, 00:12:31.800 "data_offset": 2048, 00:12:31.800 "data_size": 63488 00:12:31.800 }, 00:12:31.800 { 00:12:31.800 "name": "BaseBdev4", 00:12:31.800 "uuid": "7950dd5a-a018-54d8-ab44-0cee67a9d3c2", 00:12:31.800 "is_configured": true, 00:12:31.800 "data_offset": 2048, 00:12:31.800 "data_size": 63488 00:12:31.800 } 00:12:31.800 ] 00:12:31.800 }' 00:12:31.800 12:55:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.800 12:55:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:31.800 12:55:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.800 12:55:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:31.800 12:55:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=373 00:12:31.800 12:55:49 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:31.800 12:55:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:31.800 12:55:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.800 12:55:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:31.800 12:55:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:31.800 12:55:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.800 12:55:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.800 12:55:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.800 12:55:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.800 12:55:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.800 12:55:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.800 12:55:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.800 "name": "raid_bdev1", 00:12:31.800 "uuid": "79ec3a51-07de-4936-bd5e-61c4a81ad745", 00:12:31.800 "strip_size_kb": 0, 00:12:31.800 "state": "online", 00:12:31.800 "raid_level": "raid1", 00:12:31.800 "superblock": true, 00:12:31.800 "num_base_bdevs": 4, 00:12:31.800 "num_base_bdevs_discovered": 3, 00:12:31.800 "num_base_bdevs_operational": 3, 00:12:31.800 "process": { 00:12:31.800 "type": "rebuild", 00:12:31.800 "target": "spare", 00:12:31.800 "progress": { 00:12:31.800 "blocks": 26624, 00:12:31.800 "percent": 41 00:12:31.800 } 00:12:31.800 }, 00:12:31.800 "base_bdevs_list": [ 00:12:31.800 { 00:12:31.800 "name": "spare", 00:12:31.800 "uuid": "cc5f6013-975b-5013-af77-16d287d19c4b", 00:12:31.800 "is_configured": 
true, 00:12:31.800 "data_offset": 2048, 00:12:31.800 "data_size": 63488 00:12:31.800 }, 00:12:31.800 { 00:12:31.800 "name": null, 00:12:31.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.800 "is_configured": false, 00:12:31.800 "data_offset": 0, 00:12:31.800 "data_size": 63488 00:12:31.800 }, 00:12:31.800 { 00:12:31.800 "name": "BaseBdev3", 00:12:31.800 "uuid": "c4d72f27-3f24-56a6-bad3-e22298332a5a", 00:12:31.800 "is_configured": true, 00:12:31.800 "data_offset": 2048, 00:12:31.800 "data_size": 63488 00:12:31.800 }, 00:12:31.800 { 00:12:31.800 "name": "BaseBdev4", 00:12:31.800 "uuid": "7950dd5a-a018-54d8-ab44-0cee67a9d3c2", 00:12:31.800 "is_configured": true, 00:12:31.800 "data_offset": 2048, 00:12:31.800 "data_size": 63488 00:12:31.800 } 00:12:31.800 ] 00:12:31.800 }' 00:12:31.800 12:55:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.800 12:55:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:31.800 12:55:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.800 12:55:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:31.800 12:55:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:33.177 12:55:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:33.177 12:55:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:33.177 12:55:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:33.177 12:55:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:33.177 12:55:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:33.177 12:55:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:12:33.177 12:55:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.177 12:55:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.177 12:55:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.177 12:55:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.177 12:55:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.177 12:55:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.177 "name": "raid_bdev1", 00:12:33.177 "uuid": "79ec3a51-07de-4936-bd5e-61c4a81ad745", 00:12:33.177 "strip_size_kb": 0, 00:12:33.177 "state": "online", 00:12:33.177 "raid_level": "raid1", 00:12:33.177 "superblock": true, 00:12:33.177 "num_base_bdevs": 4, 00:12:33.177 "num_base_bdevs_discovered": 3, 00:12:33.177 "num_base_bdevs_operational": 3, 00:12:33.177 "process": { 00:12:33.177 "type": "rebuild", 00:12:33.177 "target": "spare", 00:12:33.177 "progress": { 00:12:33.177 "blocks": 51200, 00:12:33.177 "percent": 80 00:12:33.177 } 00:12:33.177 }, 00:12:33.177 "base_bdevs_list": [ 00:12:33.177 { 00:12:33.177 "name": "spare", 00:12:33.177 "uuid": "cc5f6013-975b-5013-af77-16d287d19c4b", 00:12:33.177 "is_configured": true, 00:12:33.177 "data_offset": 2048, 00:12:33.177 "data_size": 63488 00:12:33.177 }, 00:12:33.177 { 00:12:33.177 "name": null, 00:12:33.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.177 "is_configured": false, 00:12:33.177 "data_offset": 0, 00:12:33.177 "data_size": 63488 00:12:33.177 }, 00:12:33.177 { 00:12:33.177 "name": "BaseBdev3", 00:12:33.177 "uuid": "c4d72f27-3f24-56a6-bad3-e22298332a5a", 00:12:33.177 "is_configured": true, 00:12:33.177 "data_offset": 2048, 00:12:33.177 "data_size": 63488 00:12:33.177 }, 00:12:33.177 { 00:12:33.177 "name": "BaseBdev4", 00:12:33.177 "uuid": 
"7950dd5a-a018-54d8-ab44-0cee67a9d3c2", 00:12:33.177 "is_configured": true, 00:12:33.177 "data_offset": 2048, 00:12:33.177 "data_size": 63488 00:12:33.177 } 00:12:33.177 ] 00:12:33.177 }' 00:12:33.177 12:55:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:33.177 12:55:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:33.177 12:55:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.177 12:55:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:33.177 12:55:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:33.437 [2024-11-26 12:55:51.073871] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:33.437 [2024-11-26 12:55:51.073941] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:33.437 [2024-11-26 12:55:51.074035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.003 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:34.003 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:34.003 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.003 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:34.003 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:34.003 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.003 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.003 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:12:34.003 12:55:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.003 12:55:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.003 12:55:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.261 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.261 "name": "raid_bdev1", 00:12:34.261 "uuid": "79ec3a51-07de-4936-bd5e-61c4a81ad745", 00:12:34.261 "strip_size_kb": 0, 00:12:34.261 "state": "online", 00:12:34.261 "raid_level": "raid1", 00:12:34.261 "superblock": true, 00:12:34.261 "num_base_bdevs": 4, 00:12:34.261 "num_base_bdevs_discovered": 3, 00:12:34.261 "num_base_bdevs_operational": 3, 00:12:34.261 "base_bdevs_list": [ 00:12:34.261 { 00:12:34.261 "name": "spare", 00:12:34.261 "uuid": "cc5f6013-975b-5013-af77-16d287d19c4b", 00:12:34.261 "is_configured": true, 00:12:34.261 "data_offset": 2048, 00:12:34.261 "data_size": 63488 00:12:34.261 }, 00:12:34.261 { 00:12:34.261 "name": null, 00:12:34.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.261 "is_configured": false, 00:12:34.261 "data_offset": 0, 00:12:34.261 "data_size": 63488 00:12:34.261 }, 00:12:34.261 { 00:12:34.261 "name": "BaseBdev3", 00:12:34.261 "uuid": "c4d72f27-3f24-56a6-bad3-e22298332a5a", 00:12:34.261 "is_configured": true, 00:12:34.261 "data_offset": 2048, 00:12:34.261 "data_size": 63488 00:12:34.261 }, 00:12:34.261 { 00:12:34.261 "name": "BaseBdev4", 00:12:34.261 "uuid": "7950dd5a-a018-54d8-ab44-0cee67a9d3c2", 00:12:34.261 "is_configured": true, 00:12:34.261 "data_offset": 2048, 00:12:34.261 "data_size": 63488 00:12:34.261 } 00:12:34.261 ] 00:12:34.261 }' 00:12:34.261 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.261 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:34.261 
12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.261 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:34.261 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:34.262 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:34.262 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.262 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:34.262 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:34.262 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.262 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.262 12:55:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.262 12:55:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.262 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.262 12:55:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.262 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.262 "name": "raid_bdev1", 00:12:34.262 "uuid": "79ec3a51-07de-4936-bd5e-61c4a81ad745", 00:12:34.262 "strip_size_kb": 0, 00:12:34.262 "state": "online", 00:12:34.262 "raid_level": "raid1", 00:12:34.262 "superblock": true, 00:12:34.262 "num_base_bdevs": 4, 00:12:34.262 "num_base_bdevs_discovered": 3, 00:12:34.262 "num_base_bdevs_operational": 3, 00:12:34.262 "base_bdevs_list": [ 00:12:34.262 { 00:12:34.262 "name": "spare", 00:12:34.262 "uuid": 
"cc5f6013-975b-5013-af77-16d287d19c4b", 00:12:34.262 "is_configured": true, 00:12:34.262 "data_offset": 2048, 00:12:34.262 "data_size": 63488 00:12:34.262 }, 00:12:34.262 { 00:12:34.262 "name": null, 00:12:34.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.262 "is_configured": false, 00:12:34.262 "data_offset": 0, 00:12:34.262 "data_size": 63488 00:12:34.262 }, 00:12:34.262 { 00:12:34.262 "name": "BaseBdev3", 00:12:34.262 "uuid": "c4d72f27-3f24-56a6-bad3-e22298332a5a", 00:12:34.262 "is_configured": true, 00:12:34.262 "data_offset": 2048, 00:12:34.262 "data_size": 63488 00:12:34.262 }, 00:12:34.262 { 00:12:34.262 "name": "BaseBdev4", 00:12:34.262 "uuid": "7950dd5a-a018-54d8-ab44-0cee67a9d3c2", 00:12:34.262 "is_configured": true, 00:12:34.262 "data_offset": 2048, 00:12:34.262 "data_size": 63488 00:12:34.262 } 00:12:34.262 ] 00:12:34.262 }' 00:12:34.262 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.262 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:34.262 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.262 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:34.262 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:34.262 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.262 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.262 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.262 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.262 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:12:34.262 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.262 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.262 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.262 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.262 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.262 12:55:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.262 12:55:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.262 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.262 12:55:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.520 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.520 "name": "raid_bdev1", 00:12:34.520 "uuid": "79ec3a51-07de-4936-bd5e-61c4a81ad745", 00:12:34.520 "strip_size_kb": 0, 00:12:34.520 "state": "online", 00:12:34.520 "raid_level": "raid1", 00:12:34.520 "superblock": true, 00:12:34.520 "num_base_bdevs": 4, 00:12:34.520 "num_base_bdevs_discovered": 3, 00:12:34.520 "num_base_bdevs_operational": 3, 00:12:34.520 "base_bdevs_list": [ 00:12:34.520 { 00:12:34.520 "name": "spare", 00:12:34.520 "uuid": "cc5f6013-975b-5013-af77-16d287d19c4b", 00:12:34.520 "is_configured": true, 00:12:34.520 "data_offset": 2048, 00:12:34.520 "data_size": 63488 00:12:34.520 }, 00:12:34.520 { 00:12:34.520 "name": null, 00:12:34.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.521 "is_configured": false, 00:12:34.521 "data_offset": 0, 00:12:34.521 "data_size": 63488 00:12:34.521 }, 00:12:34.521 { 00:12:34.521 "name": "BaseBdev3", 00:12:34.521 "uuid": 
"c4d72f27-3f24-56a6-bad3-e22298332a5a", 00:12:34.521 "is_configured": true, 00:12:34.521 "data_offset": 2048, 00:12:34.521 "data_size": 63488 00:12:34.521 }, 00:12:34.521 { 00:12:34.521 "name": "BaseBdev4", 00:12:34.521 "uuid": "7950dd5a-a018-54d8-ab44-0cee67a9d3c2", 00:12:34.521 "is_configured": true, 00:12:34.521 "data_offset": 2048, 00:12:34.521 "data_size": 63488 00:12:34.521 } 00:12:34.521 ] 00:12:34.521 }' 00:12:34.521 12:55:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.521 12:55:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.779 12:55:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:34.779 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.779 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.779 [2024-11-26 12:55:52.399608] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:34.779 [2024-11-26 12:55:52.399690] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:34.779 [2024-11-26 12:55:52.399795] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:34.779 [2024-11-26 12:55:52.399896] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:34.779 [2024-11-26 12:55:52.399946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:34.779 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.779 12:55:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.779 12:55:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:34.779 12:55:52 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.779 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.779 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.779 12:55:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:34.779 12:55:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:34.779 12:55:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:34.779 12:55:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:34.779 12:55:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:34.780 12:55:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:34.780 12:55:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:34.780 12:55:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:34.780 12:55:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:34.780 12:55:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:34.780 12:55:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:34.780 12:55:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:34.780 12:55:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:35.039 /dev/nbd0 00:12:35.039 12:55:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:35.039 12:55:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:35.039 12:55:52 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:35.039 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:35.039 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:35.039 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:35.039 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:35.039 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:35.039 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:35.039 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:35.039 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:35.039 1+0 records in 00:12:35.039 1+0 records out 00:12:35.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284754 s, 14.4 MB/s 00:12:35.039 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.039 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:35.039 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.039 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:35.039 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:35.039 12:55:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:35.039 12:55:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:35.039 12:55:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:35.299 /dev/nbd1 00:12:35.299 12:55:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:35.299 12:55:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:35.299 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:35.299 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:35.299 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:35.299 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:35.299 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:35.299 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:35.299 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:35.299 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:35.299 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:35.299 1+0 records in 00:12:35.299 1+0 records out 00:12:35.299 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253709 s, 16.1 MB/s 00:12:35.299 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.299 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:35.299 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.299 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # 
'[' 4096 '!=' 0 ']' 00:12:35.299 12:55:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:35.299 12:55:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:35.299 12:55:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:35.299 12:55:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:35.558 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:35.558 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:35.558 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:35.558 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:35.558 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:35.558 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.558 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:35.558 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:35.558 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:35.558 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:35.558 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.558 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.558 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:35.558 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:35.558 
12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.558 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.558 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:35.818 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:35.818 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:35.818 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:35.818 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.818 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.818 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:35.818 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:35.818 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.818 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:35.818 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:35.818 12:55:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.818 12:55:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.818 12:55:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.818 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:35.818 12:55:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.818 12:55:53 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:35.818 [2024-11-26 12:55:53.420725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:35.818 [2024-11-26 12:55:53.420828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.818 [2024-11-26 12:55:53.420854] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:35.818 [2024-11-26 12:55:53.420867] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.818 [2024-11-26 12:55:53.422967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.818 [2024-11-26 12:55:53.423009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:35.818 [2024-11-26 12:55:53.423092] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:35.818 [2024-11-26 12:55:53.423143] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:35.818 [2024-11-26 12:55:53.423268] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:35.818 [2024-11-26 12:55:53.423377] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:35.818 spare 00:12:35.818 12:55:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.818 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:35.818 12:55:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.818 12:55:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.077 [2024-11-26 12:55:53.523262] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:12:36.078 [2024-11-26 12:55:53.523298] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:36.078 [2024-11-26 
12:55:53.523630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:36.078 [2024-11-26 12:55:53.523806] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:12:36.078 [2024-11-26 12:55:53.523818] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:12:36.078 [2024-11-26 12:55:53.523958] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.078 12:55:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.078 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:36.078 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.078 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.078 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.078 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.078 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:36.078 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.078 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.078 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.078 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.078 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.078 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.078 12:55:53 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.078 12:55:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.078 12:55:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.078 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.078 "name": "raid_bdev1", 00:12:36.078 "uuid": "79ec3a51-07de-4936-bd5e-61c4a81ad745", 00:12:36.078 "strip_size_kb": 0, 00:12:36.078 "state": "online", 00:12:36.078 "raid_level": "raid1", 00:12:36.078 "superblock": true, 00:12:36.078 "num_base_bdevs": 4, 00:12:36.078 "num_base_bdevs_discovered": 3, 00:12:36.078 "num_base_bdevs_operational": 3, 00:12:36.078 "base_bdevs_list": [ 00:12:36.078 { 00:12:36.078 "name": "spare", 00:12:36.078 "uuid": "cc5f6013-975b-5013-af77-16d287d19c4b", 00:12:36.078 "is_configured": true, 00:12:36.078 "data_offset": 2048, 00:12:36.078 "data_size": 63488 00:12:36.078 }, 00:12:36.078 { 00:12:36.078 "name": null, 00:12:36.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.078 "is_configured": false, 00:12:36.078 "data_offset": 2048, 00:12:36.078 "data_size": 63488 00:12:36.078 }, 00:12:36.078 { 00:12:36.078 "name": "BaseBdev3", 00:12:36.078 "uuid": "c4d72f27-3f24-56a6-bad3-e22298332a5a", 00:12:36.078 "is_configured": true, 00:12:36.078 "data_offset": 2048, 00:12:36.078 "data_size": 63488 00:12:36.078 }, 00:12:36.078 { 00:12:36.078 "name": "BaseBdev4", 00:12:36.078 "uuid": "7950dd5a-a018-54d8-ab44-0cee67a9d3c2", 00:12:36.078 "is_configured": true, 00:12:36.078 "data_offset": 2048, 00:12:36.078 "data_size": 63488 00:12:36.078 } 00:12:36.078 ] 00:12:36.078 }' 00:12:36.078 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.078 12:55:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.336 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:12:36.336 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.336 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:36.336 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:36.336 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.336 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.336 12:55:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.336 12:55:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.336 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.336 12:55:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.596 "name": "raid_bdev1", 00:12:36.596 "uuid": "79ec3a51-07de-4936-bd5e-61c4a81ad745", 00:12:36.596 "strip_size_kb": 0, 00:12:36.596 "state": "online", 00:12:36.596 "raid_level": "raid1", 00:12:36.596 "superblock": true, 00:12:36.596 "num_base_bdevs": 4, 00:12:36.596 "num_base_bdevs_discovered": 3, 00:12:36.596 "num_base_bdevs_operational": 3, 00:12:36.596 "base_bdevs_list": [ 00:12:36.596 { 00:12:36.596 "name": "spare", 00:12:36.596 "uuid": "cc5f6013-975b-5013-af77-16d287d19c4b", 00:12:36.596 "is_configured": true, 00:12:36.596 "data_offset": 2048, 00:12:36.596 "data_size": 63488 00:12:36.596 }, 00:12:36.596 { 00:12:36.596 "name": null, 00:12:36.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.596 "is_configured": false, 00:12:36.596 "data_offset": 2048, 00:12:36.596 "data_size": 63488 00:12:36.596 }, 00:12:36.596 { 00:12:36.596 "name": 
"BaseBdev3", 00:12:36.596 "uuid": "c4d72f27-3f24-56a6-bad3-e22298332a5a", 00:12:36.596 "is_configured": true, 00:12:36.596 "data_offset": 2048, 00:12:36.596 "data_size": 63488 00:12:36.596 }, 00:12:36.596 { 00:12:36.596 "name": "BaseBdev4", 00:12:36.596 "uuid": "7950dd5a-a018-54d8-ab44-0cee67a9d3c2", 00:12:36.596 "is_configured": true, 00:12:36.596 "data_offset": 2048, 00:12:36.596 "data_size": 63488 00:12:36.596 } 00:12:36.596 ] 00:12:36.596 }' 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.596 [2024-11-26 12:55:54.155569] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:36.596 12:55:54 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.596 "name": "raid_bdev1", 00:12:36.596 "uuid": "79ec3a51-07de-4936-bd5e-61c4a81ad745", 00:12:36.596 "strip_size_kb": 0, 00:12:36.596 "state": "online", 
00:12:36.596 "raid_level": "raid1", 00:12:36.596 "superblock": true, 00:12:36.596 "num_base_bdevs": 4, 00:12:36.596 "num_base_bdevs_discovered": 2, 00:12:36.596 "num_base_bdevs_operational": 2, 00:12:36.596 "base_bdevs_list": [ 00:12:36.596 { 00:12:36.596 "name": null, 00:12:36.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.596 "is_configured": false, 00:12:36.596 "data_offset": 0, 00:12:36.596 "data_size": 63488 00:12:36.596 }, 00:12:36.596 { 00:12:36.596 "name": null, 00:12:36.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.596 "is_configured": false, 00:12:36.596 "data_offset": 2048, 00:12:36.596 "data_size": 63488 00:12:36.596 }, 00:12:36.596 { 00:12:36.596 "name": "BaseBdev3", 00:12:36.596 "uuid": "c4d72f27-3f24-56a6-bad3-e22298332a5a", 00:12:36.596 "is_configured": true, 00:12:36.596 "data_offset": 2048, 00:12:36.596 "data_size": 63488 00:12:36.596 }, 00:12:36.596 { 00:12:36.596 "name": "BaseBdev4", 00:12:36.596 "uuid": "7950dd5a-a018-54d8-ab44-0cee67a9d3c2", 00:12:36.596 "is_configured": true, 00:12:36.596 "data_offset": 2048, 00:12:36.596 "data_size": 63488 00:12:36.596 } 00:12:36.596 ] 00:12:36.596 }' 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.596 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.164 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:37.164 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.164 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.164 [2024-11-26 12:55:54.626779] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:37.164 [2024-11-26 12:55:54.627037] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 
00:12:37.164 [2024-11-26 12:55:54.627111] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:37.164 [2024-11-26 12:55:54.627190] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:37.164 [2024-11-26 12:55:54.630410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:37.164 [2024-11-26 12:55:54.632411] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:37.164 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.164 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:38.111 12:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:38.111 12:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.111 12:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:38.111 12:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:38.111 12:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.111 12:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.111 12:55:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.111 12:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.111 12:55:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.111 12:55:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.111 12:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.111 "name": "raid_bdev1", 00:12:38.111 "uuid": 
"79ec3a51-07de-4936-bd5e-61c4a81ad745", 00:12:38.111 "strip_size_kb": 0, 00:12:38.111 "state": "online", 00:12:38.111 "raid_level": "raid1", 00:12:38.111 "superblock": true, 00:12:38.111 "num_base_bdevs": 4, 00:12:38.111 "num_base_bdevs_discovered": 3, 00:12:38.111 "num_base_bdevs_operational": 3, 00:12:38.111 "process": { 00:12:38.111 "type": "rebuild", 00:12:38.111 "target": "spare", 00:12:38.111 "progress": { 00:12:38.111 "blocks": 20480, 00:12:38.111 "percent": 32 00:12:38.111 } 00:12:38.111 }, 00:12:38.111 "base_bdevs_list": [ 00:12:38.111 { 00:12:38.111 "name": "spare", 00:12:38.112 "uuid": "cc5f6013-975b-5013-af77-16d287d19c4b", 00:12:38.112 "is_configured": true, 00:12:38.112 "data_offset": 2048, 00:12:38.112 "data_size": 63488 00:12:38.112 }, 00:12:38.112 { 00:12:38.112 "name": null, 00:12:38.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.112 "is_configured": false, 00:12:38.112 "data_offset": 2048, 00:12:38.112 "data_size": 63488 00:12:38.112 }, 00:12:38.112 { 00:12:38.112 "name": "BaseBdev3", 00:12:38.112 "uuid": "c4d72f27-3f24-56a6-bad3-e22298332a5a", 00:12:38.112 "is_configured": true, 00:12:38.112 "data_offset": 2048, 00:12:38.112 "data_size": 63488 00:12:38.112 }, 00:12:38.112 { 00:12:38.112 "name": "BaseBdev4", 00:12:38.112 "uuid": "7950dd5a-a018-54d8-ab44-0cee67a9d3c2", 00:12:38.112 "is_configured": true, 00:12:38.112 "data_offset": 2048, 00:12:38.112 "data_size": 63488 00:12:38.112 } 00:12:38.112 ] 00:12:38.112 }' 00:12:38.112 12:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.112 12:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:38.112 12:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.112 12:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:38.112 12:55:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:38.112 12:55:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.112 12:55:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.112 [2024-11-26 12:55:55.775681] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:38.371 [2024-11-26 12:55:55.836429] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:38.371 [2024-11-26 12:55:55.836535] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.371 [2024-11-26 12:55:55.836571] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:38.371 [2024-11-26 12:55:55.836593] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:38.371 12:55:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.371 12:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:38.371 12:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.371 12:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.371 12:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.371 12:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.371 12:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:38.371 12:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.371 12:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.371 12:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:38.371 12:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.371 12:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.371 12:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.371 12:55:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.371 12:55:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.371 12:55:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.371 12:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.371 "name": "raid_bdev1", 00:12:38.371 "uuid": "79ec3a51-07de-4936-bd5e-61c4a81ad745", 00:12:38.371 "strip_size_kb": 0, 00:12:38.371 "state": "online", 00:12:38.371 "raid_level": "raid1", 00:12:38.371 "superblock": true, 00:12:38.371 "num_base_bdevs": 4, 00:12:38.371 "num_base_bdevs_discovered": 2, 00:12:38.371 "num_base_bdevs_operational": 2, 00:12:38.371 "base_bdevs_list": [ 00:12:38.371 { 00:12:38.371 "name": null, 00:12:38.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.372 "is_configured": false, 00:12:38.372 "data_offset": 0, 00:12:38.372 "data_size": 63488 00:12:38.372 }, 00:12:38.372 { 00:12:38.372 "name": null, 00:12:38.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.372 "is_configured": false, 00:12:38.372 "data_offset": 2048, 00:12:38.372 "data_size": 63488 00:12:38.372 }, 00:12:38.372 { 00:12:38.372 "name": "BaseBdev3", 00:12:38.372 "uuid": "c4d72f27-3f24-56a6-bad3-e22298332a5a", 00:12:38.372 "is_configured": true, 00:12:38.372 "data_offset": 2048, 00:12:38.372 "data_size": 63488 00:12:38.372 }, 00:12:38.372 { 00:12:38.372 "name": "BaseBdev4", 00:12:38.372 "uuid": "7950dd5a-a018-54d8-ab44-0cee67a9d3c2", 00:12:38.372 "is_configured": true, 00:12:38.372 
"data_offset": 2048, 00:12:38.372 "data_size": 63488 00:12:38.372 } 00:12:38.372 ] 00:12:38.372 }' 00:12:38.372 12:55:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.372 12:55:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.631 12:55:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:38.631 12:55:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.631 12:55:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.631 [2024-11-26 12:55:56.299593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:38.631 [2024-11-26 12:55:56.299700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.631 [2024-11-26 12:55:56.299726] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:12:38.631 [2024-11-26 12:55:56.299738] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.631 [2024-11-26 12:55:56.300200] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.631 [2024-11-26 12:55:56.300222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:38.631 [2024-11-26 12:55:56.300302] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:38.631 [2024-11-26 12:55:56.300320] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:38.631 [2024-11-26 12:55:56.300330] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:38.631 [2024-11-26 12:55:56.300360] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:38.631 spare 00:12:38.631 [2024-11-26 12:55:56.303608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:38.631 12:55:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.631 12:55:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:38.631 [2024-11-26 12:55:56.305511] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:40.009 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:40.009 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.009 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:40.009 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:40.009 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.009 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.009 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.009 12:55:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.009 12:55:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.010 12:55:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.010 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.010 "name": "raid_bdev1", 00:12:40.010 "uuid": "79ec3a51-07de-4936-bd5e-61c4a81ad745", 00:12:40.010 "strip_size_kb": 0, 00:12:40.010 "state": "online", 00:12:40.010 
"raid_level": "raid1", 00:12:40.010 "superblock": true, 00:12:40.010 "num_base_bdevs": 4, 00:12:40.010 "num_base_bdevs_discovered": 3, 00:12:40.010 "num_base_bdevs_operational": 3, 00:12:40.010 "process": { 00:12:40.010 "type": "rebuild", 00:12:40.010 "target": "spare", 00:12:40.010 "progress": { 00:12:40.010 "blocks": 20480, 00:12:40.010 "percent": 32 00:12:40.010 } 00:12:40.010 }, 00:12:40.010 "base_bdevs_list": [ 00:12:40.010 { 00:12:40.010 "name": "spare", 00:12:40.010 "uuid": "cc5f6013-975b-5013-af77-16d287d19c4b", 00:12:40.010 "is_configured": true, 00:12:40.010 "data_offset": 2048, 00:12:40.010 "data_size": 63488 00:12:40.010 }, 00:12:40.010 { 00:12:40.010 "name": null, 00:12:40.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.010 "is_configured": false, 00:12:40.010 "data_offset": 2048, 00:12:40.010 "data_size": 63488 00:12:40.010 }, 00:12:40.010 { 00:12:40.010 "name": "BaseBdev3", 00:12:40.010 "uuid": "c4d72f27-3f24-56a6-bad3-e22298332a5a", 00:12:40.010 "is_configured": true, 00:12:40.010 "data_offset": 2048, 00:12:40.010 "data_size": 63488 00:12:40.010 }, 00:12:40.010 { 00:12:40.010 "name": "BaseBdev4", 00:12:40.010 "uuid": "7950dd5a-a018-54d8-ab44-0cee67a9d3c2", 00:12:40.010 "is_configured": true, 00:12:40.010 "data_offset": 2048, 00:12:40.010 "data_size": 63488 00:12:40.010 } 00:12:40.010 ] 00:12:40.010 }' 00:12:40.010 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.010 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:40.010 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.010 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:40.010 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:40.010 12:55:57 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.010 12:55:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.010 [2024-11-26 12:55:57.466335] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:40.010 [2024-11-26 12:55:57.509472] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:40.010 [2024-11-26 12:55:57.509522] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.010 [2024-11-26 12:55:57.509538] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:40.010 [2024-11-26 12:55:57.509545] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:40.010 12:55:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.010 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:40.010 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.010 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.010 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.010 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.010 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:40.010 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.010 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.010 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.010 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.010 
12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.010 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.010 12:55:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.010 12:55:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.010 12:55:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.010 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.010 "name": "raid_bdev1", 00:12:40.010 "uuid": "79ec3a51-07de-4936-bd5e-61c4a81ad745", 00:12:40.010 "strip_size_kb": 0, 00:12:40.010 "state": "online", 00:12:40.010 "raid_level": "raid1", 00:12:40.010 "superblock": true, 00:12:40.010 "num_base_bdevs": 4, 00:12:40.010 "num_base_bdevs_discovered": 2, 00:12:40.010 "num_base_bdevs_operational": 2, 00:12:40.010 "base_bdevs_list": [ 00:12:40.010 { 00:12:40.010 "name": null, 00:12:40.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.010 "is_configured": false, 00:12:40.010 "data_offset": 0, 00:12:40.010 "data_size": 63488 00:12:40.010 }, 00:12:40.010 { 00:12:40.010 "name": null, 00:12:40.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.010 "is_configured": false, 00:12:40.010 "data_offset": 2048, 00:12:40.010 "data_size": 63488 00:12:40.010 }, 00:12:40.010 { 00:12:40.010 "name": "BaseBdev3", 00:12:40.010 "uuid": "c4d72f27-3f24-56a6-bad3-e22298332a5a", 00:12:40.010 "is_configured": true, 00:12:40.010 "data_offset": 2048, 00:12:40.010 "data_size": 63488 00:12:40.010 }, 00:12:40.010 { 00:12:40.010 "name": "BaseBdev4", 00:12:40.010 "uuid": "7950dd5a-a018-54d8-ab44-0cee67a9d3c2", 00:12:40.010 "is_configured": true, 00:12:40.010 "data_offset": 2048, 00:12:40.010 "data_size": 63488 00:12:40.010 } 00:12:40.010 ] 00:12:40.010 }' 00:12:40.010 12:55:57 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.010 12:55:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.577 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:40.577 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.577 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:40.577 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:40.577 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.578 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.578 12:55:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.578 12:55:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.578 12:55:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.578 12:55:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.578 12:55:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.578 "name": "raid_bdev1", 00:12:40.578 "uuid": "79ec3a51-07de-4936-bd5e-61c4a81ad745", 00:12:40.578 "strip_size_kb": 0, 00:12:40.578 "state": "online", 00:12:40.578 "raid_level": "raid1", 00:12:40.578 "superblock": true, 00:12:40.578 "num_base_bdevs": 4, 00:12:40.578 "num_base_bdevs_discovered": 2, 00:12:40.578 "num_base_bdevs_operational": 2, 00:12:40.578 "base_bdevs_list": [ 00:12:40.578 { 00:12:40.578 "name": null, 00:12:40.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.578 "is_configured": false, 00:12:40.578 "data_offset": 0, 00:12:40.578 "data_size": 63488 00:12:40.578 }, 00:12:40.578 
{ 00:12:40.578 "name": null, 00:12:40.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.578 "is_configured": false, 00:12:40.578 "data_offset": 2048, 00:12:40.578 "data_size": 63488 00:12:40.578 }, 00:12:40.578 { 00:12:40.578 "name": "BaseBdev3", 00:12:40.578 "uuid": "c4d72f27-3f24-56a6-bad3-e22298332a5a", 00:12:40.578 "is_configured": true, 00:12:40.578 "data_offset": 2048, 00:12:40.578 "data_size": 63488 00:12:40.578 }, 00:12:40.578 { 00:12:40.578 "name": "BaseBdev4", 00:12:40.578 "uuid": "7950dd5a-a018-54d8-ab44-0cee67a9d3c2", 00:12:40.578 "is_configured": true, 00:12:40.578 "data_offset": 2048, 00:12:40.578 "data_size": 63488 00:12:40.578 } 00:12:40.578 ] 00:12:40.578 }' 00:12:40.578 12:55:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.578 12:55:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:40.578 12:55:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.578 12:55:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:40.578 12:55:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:40.578 12:55:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.578 12:55:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.578 12:55:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.578 12:55:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:40.578 12:55:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.578 12:55:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.578 [2024-11-26 12:55:58.120262] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:40.578 [2024-11-26 12:55:58.120310] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.578 [2024-11-26 12:55:58.120348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:12:40.578 [2024-11-26 12:55:58.120357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.578 [2024-11-26 12:55:58.120764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.578 [2024-11-26 12:55:58.120780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:40.578 [2024-11-26 12:55:58.120847] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:40.578 [2024-11-26 12:55:58.120870] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:40.578 [2024-11-26 12:55:58.120879] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:40.578 [2024-11-26 12:55:58.120888] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:40.578 BaseBdev1 00:12:40.578 12:55:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.578 12:55:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:41.512 12:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:41.512 12:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.512 12:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.512 12:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.512 12:55:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.512 12:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:41.512 12:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.512 12:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.512 12:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.512 12:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.512 12:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.512 12:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.512 12:55:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.512 12:55:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.512 12:55:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.512 12:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.512 "name": "raid_bdev1", 00:12:41.512 "uuid": "79ec3a51-07de-4936-bd5e-61c4a81ad745", 00:12:41.512 "strip_size_kb": 0, 00:12:41.512 "state": "online", 00:12:41.512 "raid_level": "raid1", 00:12:41.512 "superblock": true, 00:12:41.512 "num_base_bdevs": 4, 00:12:41.512 "num_base_bdevs_discovered": 2, 00:12:41.512 "num_base_bdevs_operational": 2, 00:12:41.512 "base_bdevs_list": [ 00:12:41.512 { 00:12:41.512 "name": null, 00:12:41.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.512 "is_configured": false, 00:12:41.512 "data_offset": 0, 00:12:41.512 "data_size": 63488 00:12:41.512 }, 00:12:41.512 { 00:12:41.512 "name": null, 00:12:41.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.512 
"is_configured": false, 00:12:41.512 "data_offset": 2048, 00:12:41.512 "data_size": 63488 00:12:41.512 }, 00:12:41.512 { 00:12:41.512 "name": "BaseBdev3", 00:12:41.512 "uuid": "c4d72f27-3f24-56a6-bad3-e22298332a5a", 00:12:41.512 "is_configured": true, 00:12:41.512 "data_offset": 2048, 00:12:41.512 "data_size": 63488 00:12:41.512 }, 00:12:41.512 { 00:12:41.512 "name": "BaseBdev4", 00:12:41.512 "uuid": "7950dd5a-a018-54d8-ab44-0cee67a9d3c2", 00:12:41.512 "is_configured": true, 00:12:41.512 "data_offset": 2048, 00:12:41.512 "data_size": 63488 00:12:41.512 } 00:12:41.512 ] 00:12:41.512 }' 00:12:41.512 12:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.512 12:55:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:12:42.081 "name": "raid_bdev1", 00:12:42.081 "uuid": "79ec3a51-07de-4936-bd5e-61c4a81ad745", 00:12:42.081 "strip_size_kb": 0, 00:12:42.081 "state": "online", 00:12:42.081 "raid_level": "raid1", 00:12:42.081 "superblock": true, 00:12:42.081 "num_base_bdevs": 4, 00:12:42.081 "num_base_bdevs_discovered": 2, 00:12:42.081 "num_base_bdevs_operational": 2, 00:12:42.081 "base_bdevs_list": [ 00:12:42.081 { 00:12:42.081 "name": null, 00:12:42.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.081 "is_configured": false, 00:12:42.081 "data_offset": 0, 00:12:42.081 "data_size": 63488 00:12:42.081 }, 00:12:42.081 { 00:12:42.081 "name": null, 00:12:42.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.081 "is_configured": false, 00:12:42.081 "data_offset": 2048, 00:12:42.081 "data_size": 63488 00:12:42.081 }, 00:12:42.081 { 00:12:42.081 "name": "BaseBdev3", 00:12:42.081 "uuid": "c4d72f27-3f24-56a6-bad3-e22298332a5a", 00:12:42.081 "is_configured": true, 00:12:42.081 "data_offset": 2048, 00:12:42.081 "data_size": 63488 00:12:42.081 }, 00:12:42.081 { 00:12:42.081 "name": "BaseBdev4", 00:12:42.081 "uuid": "7950dd5a-a018-54d8-ab44-0cee67a9d3c2", 00:12:42.081 "is_configured": true, 00:12:42.081 "data_offset": 2048, 00:12:42.081 "data_size": 63488 00:12:42.081 } 00:12:42.081 ] 00:12:42.081 }' 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.081 [2024-11-26 12:55:59.725561] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:42.081 [2024-11-26 12:55:59.725767] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:42.081 [2024-11-26 12:55:59.725836] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:42.081 request: 00:12:42.081 { 00:12:42.081 "base_bdev": "BaseBdev1", 00:12:42.081 "raid_bdev": "raid_bdev1", 00:12:42.081 "method": "bdev_raid_add_base_bdev", 00:12:42.081 "req_id": 1 00:12:42.081 } 00:12:42.081 Got JSON-RPC error response 00:12:42.081 response: 00:12:42.081 { 00:12:42.081 "code": -22, 00:12:42.081 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:42.081 } 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:42.081 12:55:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:43.459 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:43.459 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.459 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.459 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.459 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.459 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:43.459 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.459 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.459 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.459 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.459 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.459 12:56:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.459 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.459 12:56:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:43.459 12:56:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.459 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.459 "name": "raid_bdev1", 00:12:43.459 "uuid": "79ec3a51-07de-4936-bd5e-61c4a81ad745", 00:12:43.459 "strip_size_kb": 0, 00:12:43.459 "state": "online", 00:12:43.459 "raid_level": "raid1", 00:12:43.459 "superblock": true, 00:12:43.459 "num_base_bdevs": 4, 00:12:43.459 "num_base_bdevs_discovered": 2, 00:12:43.459 "num_base_bdevs_operational": 2, 00:12:43.459 "base_bdevs_list": [ 00:12:43.459 { 00:12:43.459 "name": null, 00:12:43.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.459 "is_configured": false, 00:12:43.459 "data_offset": 0, 00:12:43.459 "data_size": 63488 00:12:43.459 }, 00:12:43.459 { 00:12:43.459 "name": null, 00:12:43.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.460 "is_configured": false, 00:12:43.460 "data_offset": 2048, 00:12:43.460 "data_size": 63488 00:12:43.460 }, 00:12:43.460 { 00:12:43.460 "name": "BaseBdev3", 00:12:43.460 "uuid": "c4d72f27-3f24-56a6-bad3-e22298332a5a", 00:12:43.460 "is_configured": true, 00:12:43.460 "data_offset": 2048, 00:12:43.460 "data_size": 63488 00:12:43.460 }, 00:12:43.460 { 00:12:43.460 "name": "BaseBdev4", 00:12:43.460 "uuid": "7950dd5a-a018-54d8-ab44-0cee67a9d3c2", 00:12:43.460 "is_configured": true, 00:12:43.460 "data_offset": 2048, 00:12:43.460 "data_size": 63488 00:12:43.460 } 00:12:43.460 ] 00:12:43.460 }' 00:12:43.460 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.460 12:56:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.719 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:43.719 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.719 12:56:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:43.719 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:43.719 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.719 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.719 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.719 12:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.719 12:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.719 12:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.719 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.719 "name": "raid_bdev1", 00:12:43.719 "uuid": "79ec3a51-07de-4936-bd5e-61c4a81ad745", 00:12:43.719 "strip_size_kb": 0, 00:12:43.719 "state": "online", 00:12:43.719 "raid_level": "raid1", 00:12:43.719 "superblock": true, 00:12:43.719 "num_base_bdevs": 4, 00:12:43.719 "num_base_bdevs_discovered": 2, 00:12:43.719 "num_base_bdevs_operational": 2, 00:12:43.719 "base_bdevs_list": [ 00:12:43.719 { 00:12:43.719 "name": null, 00:12:43.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.719 "is_configured": false, 00:12:43.719 "data_offset": 0, 00:12:43.719 "data_size": 63488 00:12:43.719 }, 00:12:43.719 { 00:12:43.719 "name": null, 00:12:43.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.719 "is_configured": false, 00:12:43.719 "data_offset": 2048, 00:12:43.719 "data_size": 63488 00:12:43.719 }, 00:12:43.719 { 00:12:43.719 "name": "BaseBdev3", 00:12:43.719 "uuid": "c4d72f27-3f24-56a6-bad3-e22298332a5a", 00:12:43.719 "is_configured": true, 00:12:43.719 "data_offset": 2048, 00:12:43.719 "data_size": 63488 00:12:43.719 }, 
00:12:43.719 { 00:12:43.719 "name": "BaseBdev4", 00:12:43.719 "uuid": "7950dd5a-a018-54d8-ab44-0cee67a9d3c2", 00:12:43.720 "is_configured": true, 00:12:43.720 "data_offset": 2048, 00:12:43.720 "data_size": 63488 00:12:43.720 } 00:12:43.720 ] 00:12:43.720 }' 00:12:43.720 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.720 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:43.720 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.720 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:43.720 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 88804 00:12:43.720 12:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 88804 ']' 00:12:43.720 12:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 88804 00:12:43.720 12:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:43.720 12:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:43.720 12:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88804 00:12:43.720 killing process with pid 88804 00:12:43.720 Received shutdown signal, test time was about 60.000000 seconds 00:12:43.720 00:12:43.720 Latency(us) 00:12:43.720 [2024-11-26T12:56:01.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.720 [2024-11-26T12:56:01.404Z] =================================================================================================================== 00:12:43.720 [2024-11-26T12:56:01.404Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:43.720 12:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:12:43.720 12:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:43.720 12:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88804' 00:12:43.720 12:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 88804 00:12:43.720 [2024-11-26 12:56:01.345597] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:43.720 [2024-11-26 12:56:01.345723] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:43.720 12:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 88804 00:12:43.720 [2024-11-26 12:56:01.345783] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:43.720 [2024-11-26 12:56:01.345794] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:12:43.720 [2024-11-26 12:56:01.396563] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:43.979 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:43.979 00:12:43.979 real 0m23.318s 00:12:43.979 user 0m28.562s 00:12:43.979 sys 0m3.750s 00:12:43.979 12:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:43.979 12:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.979 ************************************ 00:12:43.979 END TEST raid_rebuild_test_sb 00:12:43.979 ************************************ 00:12:44.239 12:56:01 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:12:44.239 12:56:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:44.239 12:56:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:44.239 12:56:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:12:44.239 ************************************ 00:12:44.239 START TEST raid_rebuild_test_io 00:12:44.239 ************************************ 00:12:44.239 12:56:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:12:44.239 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:44.239 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:44.239 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:44.239 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:44.239 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:44.239 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:44.239 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:44.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89544 00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89544 00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 89544 ']' 00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:44.240 12:56:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.240 [2024-11-26 12:56:01.821518] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:44.240 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:44.240 Zero copy mechanism will not be used. 
00:12:44.240 [2024-11-26 12:56:01.821769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89544 ] 00:12:44.499 [2024-11-26 12:56:01.991742] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.499 [2024-11-26 12:56:02.036106] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.499 [2024-11-26 12:56:02.079398] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:44.499 [2024-11-26 12:56:02.079431] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:45.070 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:45.070 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:12:45.070 12:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:45.070 12:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:45.070 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.070 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.070 BaseBdev1_malloc 00:12:45.070 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.070 12:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:45.070 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.070 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.070 [2024-11-26 12:56:02.713789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:12:45.070 [2024-11-26 12:56:02.713847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.070 [2024-11-26 12:56:02.713889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:45.070 [2024-11-26 12:56:02.713902] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.070 [2024-11-26 12:56:02.715981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.070 [2024-11-26 12:56:02.716081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:45.070 BaseBdev1 00:12:45.070 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.070 12:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:45.070 12:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:45.070 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.070 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.331 BaseBdev2_malloc 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.331 [2024-11-26 12:56:02.759112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:45.331 [2024-11-26 12:56:02.759267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.331 [2024-11-26 12:56:02.759326] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:45.331 [2024-11-26 12:56:02.759354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.331 [2024-11-26 12:56:02.763872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.331 [2024-11-26 12:56:02.763936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:45.331 BaseBdev2 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.331 BaseBdev3_malloc 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.331 [2024-11-26 12:56:02.790242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:45.331 [2024-11-26 12:56:02.790289] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.331 [2024-11-26 12:56:02.790312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:45.331 [2024-11-26 12:56:02.790320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:12:45.331 [2024-11-26 12:56:02.792353] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.331 [2024-11-26 12:56:02.792431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:45.331 BaseBdev3 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.331 BaseBdev4_malloc 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.331 [2024-11-26 12:56:02.818995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:45.331 [2024-11-26 12:56:02.819048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.331 [2024-11-26 12:56:02.819072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:45.331 [2024-11-26 12:56:02.819079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.331 [2024-11-26 12:56:02.821215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.331 [2024-11-26 12:56:02.821247] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:45.331 BaseBdev4 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.331 spare_malloc 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.331 spare_delay 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.331 [2024-11-26 12:56:02.859730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:45.331 [2024-11-26 12:56:02.859781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.331 [2024-11-26 12:56:02.859802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:45.331 [2024-11-26 12:56:02.859810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:12:45.331 [2024-11-26 12:56:02.861904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.331 [2024-11-26 12:56:02.861972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:45.331 spare 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.331 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.331 [2024-11-26 12:56:02.871786] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:45.331 [2024-11-26 12:56:02.873580] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:45.331 [2024-11-26 12:56:02.873642] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:45.331 [2024-11-26 12:56:02.873680] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:45.331 [2024-11-26 12:56:02.873749] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:45.331 [2024-11-26 12:56:02.873758] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:45.331 [2024-11-26 12:56:02.873989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:45.331 [2024-11-26 12:56:02.874128] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:45.331 [2024-11-26 12:56:02.874139] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:45.332 [2024-11-26 12:56:02.874263] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:12:45.332 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.332 12:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:45.332 12:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.332 12:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.332 12:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.332 12:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.332 12:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:45.332 12:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.332 12:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.332 12:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.332 12:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.332 12:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.332 12:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.332 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.332 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.332 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.332 12:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.332 "name": "raid_bdev1", 00:12:45.332 "uuid": "0db64872-079d-435b-b453-7b0604b1dd95", 00:12:45.332 
"strip_size_kb": 0, 00:12:45.332 "state": "online", 00:12:45.332 "raid_level": "raid1", 00:12:45.332 "superblock": false, 00:12:45.332 "num_base_bdevs": 4, 00:12:45.332 "num_base_bdevs_discovered": 4, 00:12:45.332 "num_base_bdevs_operational": 4, 00:12:45.332 "base_bdevs_list": [ 00:12:45.332 { 00:12:45.332 "name": "BaseBdev1", 00:12:45.332 "uuid": "4fde7230-1a9c-5af0-b669-05b03f752fa5", 00:12:45.332 "is_configured": true, 00:12:45.332 "data_offset": 0, 00:12:45.332 "data_size": 65536 00:12:45.332 }, 00:12:45.332 { 00:12:45.332 "name": "BaseBdev2", 00:12:45.332 "uuid": "2a3d39fd-a5b9-513e-b04c-6af580c99f3d", 00:12:45.332 "is_configured": true, 00:12:45.332 "data_offset": 0, 00:12:45.332 "data_size": 65536 00:12:45.332 }, 00:12:45.332 { 00:12:45.332 "name": "BaseBdev3", 00:12:45.332 "uuid": "e9f2a968-d65a-5f96-bdee-9c7406032335", 00:12:45.332 "is_configured": true, 00:12:45.332 "data_offset": 0, 00:12:45.332 "data_size": 65536 00:12:45.332 }, 00:12:45.332 { 00:12:45.332 "name": "BaseBdev4", 00:12:45.332 "uuid": "911278db-78e7-59fe-9698-b9752e2b5aa7", 00:12:45.332 "is_configured": true, 00:12:45.332 "data_offset": 0, 00:12:45.332 "data_size": 65536 00:12:45.332 } 00:12:45.332 ] 00:12:45.332 }' 00:12:45.332 12:56:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.332 12:56:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.901 [2024-11-26 12:56:03.335344] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:45.901 12:56:03 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.901 [2024-11-26 12:56:03.430841] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.901 "name": "raid_bdev1", 00:12:45.901 "uuid": "0db64872-079d-435b-b453-7b0604b1dd95", 00:12:45.901 "strip_size_kb": 0, 00:12:45.901 "state": "online", 00:12:45.901 "raid_level": "raid1", 00:12:45.901 "superblock": false, 00:12:45.901 "num_base_bdevs": 4, 00:12:45.901 "num_base_bdevs_discovered": 3, 00:12:45.901 "num_base_bdevs_operational": 3, 00:12:45.901 "base_bdevs_list": [ 00:12:45.901 { 00:12:45.901 "name": null, 00:12:45.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.901 "is_configured": false, 00:12:45.901 "data_offset": 0, 00:12:45.901 "data_size": 65536 00:12:45.901 
}, 00:12:45.901 { 00:12:45.901 "name": "BaseBdev2", 00:12:45.901 "uuid": "2a3d39fd-a5b9-513e-b04c-6af580c99f3d", 00:12:45.901 "is_configured": true, 00:12:45.901 "data_offset": 0, 00:12:45.901 "data_size": 65536 00:12:45.901 }, 00:12:45.901 { 00:12:45.901 "name": "BaseBdev3", 00:12:45.901 "uuid": "e9f2a968-d65a-5f96-bdee-9c7406032335", 00:12:45.901 "is_configured": true, 00:12:45.901 "data_offset": 0, 00:12:45.901 "data_size": 65536 00:12:45.901 }, 00:12:45.901 { 00:12:45.901 "name": "BaseBdev4", 00:12:45.901 "uuid": "911278db-78e7-59fe-9698-b9752e2b5aa7", 00:12:45.901 "is_configured": true, 00:12:45.901 "data_offset": 0, 00:12:45.901 "data_size": 65536 00:12:45.901 } 00:12:45.901 ] 00:12:45.901 }' 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.901 12:56:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.901 [2024-11-26 12:56:03.496735] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:45.901 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:45.901 Zero copy mechanism will not be used. 00:12:45.901 Running I/O for 60 seconds... 
00:12:46.469 12:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:46.469 12:56:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.469 12:56:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.469 [2024-11-26 12:56:03.886685] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:46.469 12:56:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.469 12:56:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:46.470 [2024-11-26 12:56:03.941652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:46.470 [2024-11-26 12:56:03.943674] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:46.470 [2024-11-26 12:56:04.058318] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:46.470 [2024-11-26 12:56:04.058834] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:46.729 [2024-11-26 12:56:04.275655] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:46.729 [2024-11-26 12:56:04.276054] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:46.989 221.00 IOPS, 663.00 MiB/s [2024-11-26T12:56:04.673Z] [2024-11-26 12:56:04.617616] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:47.249 [2024-11-26 12:56:04.848764] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:47.249 [2024-11-26 12:56:04.849164] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:47.508 12:56:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:47.508 12:56:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.508 12:56:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:47.508 12:56:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:47.508 12:56:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.508 12:56:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.508 12:56:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.508 12:56:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.508 12:56:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.509 12:56:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.509 12:56:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.509 "name": "raid_bdev1", 00:12:47.509 "uuid": "0db64872-079d-435b-b453-7b0604b1dd95", 00:12:47.509 "strip_size_kb": 0, 00:12:47.509 "state": "online", 00:12:47.509 "raid_level": "raid1", 00:12:47.509 "superblock": false, 00:12:47.509 "num_base_bdevs": 4, 00:12:47.509 "num_base_bdevs_discovered": 4, 00:12:47.509 "num_base_bdevs_operational": 4, 00:12:47.509 "process": { 00:12:47.509 "type": "rebuild", 00:12:47.509 "target": "spare", 00:12:47.509 "progress": { 00:12:47.509 "blocks": 10240, 00:12:47.509 "percent": 15 00:12:47.509 } 00:12:47.509 }, 00:12:47.509 "base_bdevs_list": [ 00:12:47.509 { 00:12:47.509 "name": "spare", 00:12:47.509 "uuid": 
"c452e8e3-8ee9-557c-a58d-39ebea8ed746", 00:12:47.509 "is_configured": true, 00:12:47.509 "data_offset": 0, 00:12:47.509 "data_size": 65536 00:12:47.509 }, 00:12:47.509 { 00:12:47.509 "name": "BaseBdev2", 00:12:47.509 "uuid": "2a3d39fd-a5b9-513e-b04c-6af580c99f3d", 00:12:47.509 "is_configured": true, 00:12:47.509 "data_offset": 0, 00:12:47.509 "data_size": 65536 00:12:47.509 }, 00:12:47.509 { 00:12:47.509 "name": "BaseBdev3", 00:12:47.509 "uuid": "e9f2a968-d65a-5f96-bdee-9c7406032335", 00:12:47.509 "is_configured": true, 00:12:47.509 "data_offset": 0, 00:12:47.509 "data_size": 65536 00:12:47.509 }, 00:12:47.509 { 00:12:47.509 "name": "BaseBdev4", 00:12:47.509 "uuid": "911278db-78e7-59fe-9698-b9752e2b5aa7", 00:12:47.509 "is_configured": true, 00:12:47.509 "data_offset": 0, 00:12:47.509 "data_size": 65536 00:12:47.509 } 00:12:47.509 ] 00:12:47.509 }' 00:12:47.509 12:56:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.509 12:56:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:47.509 12:56:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.509 12:56:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:47.509 12:56:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:47.509 12:56:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.509 12:56:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.509 [2024-11-26 12:56:05.059535] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:47.509 [2024-11-26 12:56:05.084978] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:47.509 [2024-11-26 12:56:05.100425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:47.509 [2024-11-26 12:56:05.100535] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:47.509 [2024-11-26 12:56:05.100568] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:47.509 [2024-11-26 12:56:05.111399] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:47.509 12:56:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.509 12:56:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:47.509 12:56:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.509 12:56:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.509 12:56:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.509 12:56:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.509 12:56:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:47.509 12:56:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.509 12:56:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.509 12:56:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.509 12:56:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.509 12:56:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.509 12:56:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.509 12:56:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.509 12:56:05 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.509 12:56:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.509 12:56:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.509 "name": "raid_bdev1", 00:12:47.509 "uuid": "0db64872-079d-435b-b453-7b0604b1dd95", 00:12:47.509 "strip_size_kb": 0, 00:12:47.509 "state": "online", 00:12:47.509 "raid_level": "raid1", 00:12:47.509 "superblock": false, 00:12:47.509 "num_base_bdevs": 4, 00:12:47.509 "num_base_bdevs_discovered": 3, 00:12:47.509 "num_base_bdevs_operational": 3, 00:12:47.509 "base_bdevs_list": [ 00:12:47.509 { 00:12:47.509 "name": null, 00:12:47.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.509 "is_configured": false, 00:12:47.509 "data_offset": 0, 00:12:47.509 "data_size": 65536 00:12:47.509 }, 00:12:47.509 { 00:12:47.509 "name": "BaseBdev2", 00:12:47.509 "uuid": "2a3d39fd-a5b9-513e-b04c-6af580c99f3d", 00:12:47.509 "is_configured": true, 00:12:47.509 "data_offset": 0, 00:12:47.509 "data_size": 65536 00:12:47.509 }, 00:12:47.509 { 00:12:47.509 "name": "BaseBdev3", 00:12:47.509 "uuid": "e9f2a968-d65a-5f96-bdee-9c7406032335", 00:12:47.509 "is_configured": true, 00:12:47.509 "data_offset": 0, 00:12:47.509 "data_size": 65536 00:12:47.509 }, 00:12:47.509 { 00:12:47.509 "name": "BaseBdev4", 00:12:47.509 "uuid": "911278db-78e7-59fe-9698-b9752e2b5aa7", 00:12:47.509 "is_configured": true, 00:12:47.509 "data_offset": 0, 00:12:47.509 "data_size": 65536 00:12:47.509 } 00:12:47.509 ] 00:12:47.509 }' 00:12:47.509 12:56:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.509 12:56:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.078 184.50 IOPS, 553.50 MiB/s [2024-11-26T12:56:05.762Z] 12:56:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:48.078 12:56:05 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.078 12:56:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:48.078 12:56:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:48.078 12:56:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.078 12:56:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.078 12:56:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.078 12:56:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.078 12:56:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.078 12:56:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.078 12:56:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.078 "name": "raid_bdev1", 00:12:48.078 "uuid": "0db64872-079d-435b-b453-7b0604b1dd95", 00:12:48.078 "strip_size_kb": 0, 00:12:48.078 "state": "online", 00:12:48.078 "raid_level": "raid1", 00:12:48.078 "superblock": false, 00:12:48.078 "num_base_bdevs": 4, 00:12:48.078 "num_base_bdevs_discovered": 3, 00:12:48.078 "num_base_bdevs_operational": 3, 00:12:48.078 "base_bdevs_list": [ 00:12:48.078 { 00:12:48.078 "name": null, 00:12:48.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.078 "is_configured": false, 00:12:48.078 "data_offset": 0, 00:12:48.078 "data_size": 65536 00:12:48.078 }, 00:12:48.078 { 00:12:48.078 "name": "BaseBdev2", 00:12:48.078 "uuid": "2a3d39fd-a5b9-513e-b04c-6af580c99f3d", 00:12:48.078 "is_configured": true, 00:12:48.078 "data_offset": 0, 00:12:48.078 "data_size": 65536 00:12:48.078 }, 00:12:48.078 { 00:12:48.078 "name": "BaseBdev3", 00:12:48.078 "uuid": "e9f2a968-d65a-5f96-bdee-9c7406032335", 
00:12:48.078 "is_configured": true, 00:12:48.078 "data_offset": 0, 00:12:48.078 "data_size": 65536 00:12:48.078 }, 00:12:48.078 { 00:12:48.078 "name": "BaseBdev4", 00:12:48.078 "uuid": "911278db-78e7-59fe-9698-b9752e2b5aa7", 00:12:48.078 "is_configured": true, 00:12:48.078 "data_offset": 0, 00:12:48.078 "data_size": 65536 00:12:48.078 } 00:12:48.078 ] 00:12:48.078 }' 00:12:48.078 12:56:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.078 12:56:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:48.078 12:56:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.078 12:56:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:48.078 12:56:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:48.078 12:56:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.078 12:56:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.078 [2024-11-26 12:56:05.699702] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:48.078 12:56:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.078 12:56:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:48.338 [2024-11-26 12:56:05.765228] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:48.338 [2024-11-26 12:56:05.767094] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:48.338 [2024-11-26 12:56:05.886302] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:48.338 [2024-11-26 12:56:05.887540] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:48.598 [2024-11-26 12:56:06.103786] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:48.598 [2024-11-26 12:56:06.104032] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:48.857 [2024-11-26 12:56:06.330557] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:48.857 [2024-11-26 12:56:06.331963] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:49.117 152.00 IOPS, 456.00 MiB/s [2024-11-26T12:56:06.802Z] [2024-11-26 12:56:06.566834] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:49.118 [2024-11-26 12:56:06.567538] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:49.118 12:56:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:49.118 12:56:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.118 12:56:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:49.118 12:56:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:49.118 12:56:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.118 12:56:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.118 12:56:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.118 12:56:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.118 
12:56:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.118 12:56:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.118 12:56:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.118 "name": "raid_bdev1", 00:12:49.118 "uuid": "0db64872-079d-435b-b453-7b0604b1dd95", 00:12:49.118 "strip_size_kb": 0, 00:12:49.118 "state": "online", 00:12:49.118 "raid_level": "raid1", 00:12:49.118 "superblock": false, 00:12:49.118 "num_base_bdevs": 4, 00:12:49.118 "num_base_bdevs_discovered": 4, 00:12:49.118 "num_base_bdevs_operational": 4, 00:12:49.118 "process": { 00:12:49.118 "type": "rebuild", 00:12:49.118 "target": "spare", 00:12:49.118 "progress": { 00:12:49.118 "blocks": 10240, 00:12:49.118 "percent": 15 00:12:49.118 } 00:12:49.118 }, 00:12:49.118 "base_bdevs_list": [ 00:12:49.118 { 00:12:49.118 "name": "spare", 00:12:49.118 "uuid": "c452e8e3-8ee9-557c-a58d-39ebea8ed746", 00:12:49.118 "is_configured": true, 00:12:49.118 "data_offset": 0, 00:12:49.118 "data_size": 65536 00:12:49.118 }, 00:12:49.118 { 00:12:49.118 "name": "BaseBdev2", 00:12:49.118 "uuid": "2a3d39fd-a5b9-513e-b04c-6af580c99f3d", 00:12:49.118 "is_configured": true, 00:12:49.118 "data_offset": 0, 00:12:49.118 "data_size": 65536 00:12:49.118 }, 00:12:49.118 { 00:12:49.118 "name": "BaseBdev3", 00:12:49.118 "uuid": "e9f2a968-d65a-5f96-bdee-9c7406032335", 00:12:49.118 "is_configured": true, 00:12:49.118 "data_offset": 0, 00:12:49.118 "data_size": 65536 00:12:49.118 }, 00:12:49.118 { 00:12:49.118 "name": "BaseBdev4", 00:12:49.118 "uuid": "911278db-78e7-59fe-9698-b9752e2b5aa7", 00:12:49.118 "is_configured": true, 00:12:49.118 "data_offset": 0, 00:12:49.118 "data_size": 65536 00:12:49.118 } 00:12:49.118 ] 00:12:49.118 }' 00:12:49.118 12:56:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.421 12:56:06 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:49.421 12:56:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.421 12:56:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:49.421 12:56:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:49.421 12:56:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:49.421 12:56:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:49.421 12:56:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:49.421 12:56:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:49.421 12:56:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.421 12:56:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.421 [2024-11-26 12:56:06.895266] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:49.421 [2024-11-26 12:56:06.904677] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:49.421 [2024-11-26 12:56:07.011881] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:12:49.421 [2024-11-26 12:56:07.011991] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:12:49.421 [2024-11-26 12:56:07.027035] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:49.421 12:56:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.421 12:56:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:49.421 12:56:07 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:49.421 12:56:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:49.421 12:56:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.421 12:56:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:49.421 12:56:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:49.421 12:56:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.421 12:56:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.421 12:56:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.421 12:56:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.421 12:56:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.421 12:56:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.681 12:56:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.681 "name": "raid_bdev1", 00:12:49.681 "uuid": "0db64872-079d-435b-b453-7b0604b1dd95", 00:12:49.681 "strip_size_kb": 0, 00:12:49.681 "state": "online", 00:12:49.681 "raid_level": "raid1", 00:12:49.681 "superblock": false, 00:12:49.681 "num_base_bdevs": 4, 00:12:49.681 "num_base_bdevs_discovered": 3, 00:12:49.681 "num_base_bdevs_operational": 3, 00:12:49.681 "process": { 00:12:49.681 "type": "rebuild", 00:12:49.681 "target": "spare", 00:12:49.681 "progress": { 00:12:49.681 "blocks": 14336, 00:12:49.681 "percent": 21 00:12:49.681 } 00:12:49.681 }, 00:12:49.681 "base_bdevs_list": [ 00:12:49.681 { 00:12:49.681 "name": "spare", 00:12:49.681 "uuid": 
"c452e8e3-8ee9-557c-a58d-39ebea8ed746", 00:12:49.681 "is_configured": true, 00:12:49.681 "data_offset": 0, 00:12:49.681 "data_size": 65536 00:12:49.681 }, 00:12:49.681 { 00:12:49.681 "name": null, 00:12:49.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.681 "is_configured": false, 00:12:49.681 "data_offset": 0, 00:12:49.681 "data_size": 65536 00:12:49.681 }, 00:12:49.681 { 00:12:49.681 "name": "BaseBdev3", 00:12:49.681 "uuid": "e9f2a968-d65a-5f96-bdee-9c7406032335", 00:12:49.681 "is_configured": true, 00:12:49.681 "data_offset": 0, 00:12:49.681 "data_size": 65536 00:12:49.681 }, 00:12:49.681 { 00:12:49.681 "name": "BaseBdev4", 00:12:49.681 "uuid": "911278db-78e7-59fe-9698-b9752e2b5aa7", 00:12:49.681 "is_configured": true, 00:12:49.681 "data_offset": 0, 00:12:49.681 "data_size": 65536 00:12:49.681 } 00:12:49.681 ] 00:12:49.681 }' 00:12:49.681 12:56:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.681 12:56:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:49.681 12:56:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.681 12:56:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:49.681 12:56:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=391 00:12:49.681 12:56:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:49.681 12:56:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:49.681 12:56:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.682 12:56:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:49.682 12:56:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:49.682 12:56:07 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.682 12:56:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.682 12:56:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.682 12:56:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.682 12:56:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.682 12:56:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.682 12:56:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.682 "name": "raid_bdev1", 00:12:49.682 "uuid": "0db64872-079d-435b-b453-7b0604b1dd95", 00:12:49.682 "strip_size_kb": 0, 00:12:49.682 "state": "online", 00:12:49.682 "raid_level": "raid1", 00:12:49.682 "superblock": false, 00:12:49.682 "num_base_bdevs": 4, 00:12:49.682 "num_base_bdevs_discovered": 3, 00:12:49.682 "num_base_bdevs_operational": 3, 00:12:49.682 "process": { 00:12:49.682 "type": "rebuild", 00:12:49.682 "target": "spare", 00:12:49.682 "progress": { 00:12:49.682 "blocks": 14336, 00:12:49.682 "percent": 21 00:12:49.682 } 00:12:49.682 }, 00:12:49.682 "base_bdevs_list": [ 00:12:49.682 { 00:12:49.682 "name": "spare", 00:12:49.682 "uuid": "c452e8e3-8ee9-557c-a58d-39ebea8ed746", 00:12:49.682 "is_configured": true, 00:12:49.682 "data_offset": 0, 00:12:49.682 "data_size": 65536 00:12:49.682 }, 00:12:49.682 { 00:12:49.682 "name": null, 00:12:49.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.682 "is_configured": false, 00:12:49.682 "data_offset": 0, 00:12:49.682 "data_size": 65536 00:12:49.682 }, 00:12:49.682 { 00:12:49.682 "name": "BaseBdev3", 00:12:49.682 "uuid": "e9f2a968-d65a-5f96-bdee-9c7406032335", 00:12:49.682 "is_configured": true, 00:12:49.682 "data_offset": 0, 00:12:49.682 "data_size": 65536 00:12:49.682 }, 
00:12:49.682 { 00:12:49.682 "name": "BaseBdev4", 00:12:49.682 "uuid": "911278db-78e7-59fe-9698-b9752e2b5aa7", 00:12:49.682 "is_configured": true, 00:12:49.682 "data_offset": 0, 00:12:49.682 "data_size": 65536 00:12:49.682 } 00:12:49.682 ] 00:12:49.682 }' 00:12:49.682 12:56:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.682 [2024-11-26 12:56:07.253980] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:49.682 [2024-11-26 12:56:07.254515] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:49.682 12:56:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:49.682 12:56:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.682 12:56:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:49.682 12:56:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:49.941 131.25 IOPS, 393.75 MiB/s [2024-11-26T12:56:07.625Z] [2024-11-26 12:56:07.586066] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:50.200 [2024-11-26 12:56:07.799356] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:50.200 [2024-11-26 12:56:07.799573] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:50.459 [2024-11-26 12:56:08.134713] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:50.718 12:56:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:50.718 12:56:08 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.718 12:56:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.718 [2024-11-26 12:56:08.342664] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:50.718 12:56:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.718 12:56:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.718 12:56:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.718 12:56:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.718 12:56:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.718 12:56:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.719 12:56:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.719 12:56:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.719 12:56:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.719 "name": "raid_bdev1", 00:12:50.719 "uuid": "0db64872-079d-435b-b453-7b0604b1dd95", 00:12:50.719 "strip_size_kb": 0, 00:12:50.719 "state": "online", 00:12:50.719 "raid_level": "raid1", 00:12:50.719 "superblock": false, 00:12:50.719 "num_base_bdevs": 4, 00:12:50.719 "num_base_bdevs_discovered": 3, 00:12:50.719 "num_base_bdevs_operational": 3, 00:12:50.719 "process": { 00:12:50.719 "type": "rebuild", 00:12:50.719 "target": "spare", 00:12:50.719 "progress": { 00:12:50.719 "blocks": 28672, 00:12:50.719 "percent": 43 00:12:50.719 } 00:12:50.719 }, 00:12:50.719 "base_bdevs_list": [ 00:12:50.719 { 00:12:50.719 "name": "spare", 00:12:50.719 "uuid": 
"c452e8e3-8ee9-557c-a58d-39ebea8ed746", 00:12:50.719 "is_configured": true, 00:12:50.719 "data_offset": 0, 00:12:50.719 "data_size": 65536 00:12:50.719 }, 00:12:50.719 { 00:12:50.719 "name": null, 00:12:50.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.719 "is_configured": false, 00:12:50.719 "data_offset": 0, 00:12:50.719 "data_size": 65536 00:12:50.719 }, 00:12:50.719 { 00:12:50.719 "name": "BaseBdev3", 00:12:50.719 "uuid": "e9f2a968-d65a-5f96-bdee-9c7406032335", 00:12:50.719 "is_configured": true, 00:12:50.719 "data_offset": 0, 00:12:50.719 "data_size": 65536 00:12:50.719 }, 00:12:50.719 { 00:12:50.719 "name": "BaseBdev4", 00:12:50.719 "uuid": "911278db-78e7-59fe-9698-b9752e2b5aa7", 00:12:50.719 "is_configured": true, 00:12:50.719 "data_offset": 0, 00:12:50.719 "data_size": 65536 00:12:50.719 } 00:12:50.719 ] 00:12:50.719 }' 00:12:50.719 12:56:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.979 12:56:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:50.979 12:56:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.979 12:56:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:50.979 12:56:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:51.239 111.20 IOPS, 333.60 MiB/s [2024-11-26T12:56:08.923Z] [2024-11-26 12:56:08.668029] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:51.500 [2024-11-26 12:56:09.036840] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:51.500 [2024-11-26 12:56:09.037198] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:51.759 [2024-11-26 12:56:09.350475] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:52.019 [2024-11-26 12:56:09.466090] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:52.019 12:56:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:52.019 12:56:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:52.019 12:56:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.019 12:56:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:52.019 12:56:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:52.019 12:56:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.019 12:56:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.019 12:56:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.019 12:56:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.019 12:56:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.019 103.50 IOPS, 310.50 MiB/s [2024-11-26T12:56:09.703Z] 12:56:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.019 12:56:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.019 "name": "raid_bdev1", 00:12:52.019 "uuid": "0db64872-079d-435b-b453-7b0604b1dd95", 00:12:52.019 "strip_size_kb": 0, 00:12:52.019 "state": "online", 00:12:52.019 "raid_level": "raid1", 00:12:52.019 "superblock": false, 00:12:52.019 "num_base_bdevs": 4, 00:12:52.019 "num_base_bdevs_discovered": 3, 00:12:52.019 
"num_base_bdevs_operational": 3, 00:12:52.019 "process": { 00:12:52.019 "type": "rebuild", 00:12:52.019 "target": "spare", 00:12:52.019 "progress": { 00:12:52.019 "blocks": 47104, 00:12:52.019 "percent": 71 00:12:52.019 } 00:12:52.019 }, 00:12:52.019 "base_bdevs_list": [ 00:12:52.019 { 00:12:52.019 "name": "spare", 00:12:52.019 "uuid": "c452e8e3-8ee9-557c-a58d-39ebea8ed746", 00:12:52.019 "is_configured": true, 00:12:52.019 "data_offset": 0, 00:12:52.019 "data_size": 65536 00:12:52.019 }, 00:12:52.019 { 00:12:52.019 "name": null, 00:12:52.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.019 "is_configured": false, 00:12:52.019 "data_offset": 0, 00:12:52.019 "data_size": 65536 00:12:52.019 }, 00:12:52.019 { 00:12:52.019 "name": "BaseBdev3", 00:12:52.019 "uuid": "e9f2a968-d65a-5f96-bdee-9c7406032335", 00:12:52.019 "is_configured": true, 00:12:52.019 "data_offset": 0, 00:12:52.019 "data_size": 65536 00:12:52.019 }, 00:12:52.019 { 00:12:52.019 "name": "BaseBdev4", 00:12:52.019 "uuid": "911278db-78e7-59fe-9698-b9752e2b5aa7", 00:12:52.019 "is_configured": true, 00:12:52.019 "data_offset": 0, 00:12:52.019 "data_size": 65536 00:12:52.019 } 00:12:52.019 ] 00:12:52.019 }' 00:12:52.019 12:56:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.019 12:56:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:52.019 12:56:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.019 12:56:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:52.019 12:56:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:52.019 [2024-11-26 12:56:09.690161] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:12:52.959 [2024-11-26 12:56:10.465968] bdev_raid.c:2896:raid_bdev_process_thread_run: 
*DEBUG*: process completed on raid_bdev1 00:12:52.959 94.14 IOPS, 282.43 MiB/s [2024-11-26T12:56:10.643Z] [2024-11-26 12:56:10.571112] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:52.959 [2024-11-26 12:56:10.573218] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.959 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:52.959 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:52.959 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.959 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:52.959 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:52.959 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.959 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.959 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.959 12:56:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.959 12:56:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.959 12:56:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.959 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.959 "name": "raid_bdev1", 00:12:52.959 "uuid": "0db64872-079d-435b-b453-7b0604b1dd95", 00:12:52.959 "strip_size_kb": 0, 00:12:52.959 "state": "online", 00:12:52.959 "raid_level": "raid1", 00:12:52.959 "superblock": false, 00:12:52.959 "num_base_bdevs": 4, 00:12:52.959 "num_base_bdevs_discovered": 3, 00:12:52.959 
"num_base_bdevs_operational": 3, 00:12:52.959 "base_bdevs_list": [ 00:12:52.959 { 00:12:52.959 "name": "spare", 00:12:52.959 "uuid": "c452e8e3-8ee9-557c-a58d-39ebea8ed746", 00:12:52.959 "is_configured": true, 00:12:52.959 "data_offset": 0, 00:12:52.959 "data_size": 65536 00:12:52.959 }, 00:12:52.960 { 00:12:52.960 "name": null, 00:12:52.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.960 "is_configured": false, 00:12:52.960 "data_offset": 0, 00:12:52.960 "data_size": 65536 00:12:52.960 }, 00:12:52.960 { 00:12:52.960 "name": "BaseBdev3", 00:12:52.960 "uuid": "e9f2a968-d65a-5f96-bdee-9c7406032335", 00:12:52.960 "is_configured": true, 00:12:52.960 "data_offset": 0, 00:12:52.960 "data_size": 65536 00:12:52.960 }, 00:12:52.960 { 00:12:52.960 "name": "BaseBdev4", 00:12:52.960 "uuid": "911278db-78e7-59fe-9698-b9752e2b5aa7", 00:12:52.960 "is_configured": true, 00:12:52.960 "data_offset": 0, 00:12:52.960 "data_size": 65536 00:12:52.960 } 00:12:52.960 ] 00:12:52.960 }' 00:12:53.220 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.220 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:53.220 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:53.220 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:53.220 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:53.220 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:53.220 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.220 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:53.220 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:53.220 
12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.220 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.220 12:56:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.220 12:56:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.220 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.220 12:56:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.220 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.220 "name": "raid_bdev1", 00:12:53.220 "uuid": "0db64872-079d-435b-b453-7b0604b1dd95", 00:12:53.220 "strip_size_kb": 0, 00:12:53.220 "state": "online", 00:12:53.220 "raid_level": "raid1", 00:12:53.220 "superblock": false, 00:12:53.220 "num_base_bdevs": 4, 00:12:53.220 "num_base_bdevs_discovered": 3, 00:12:53.220 "num_base_bdevs_operational": 3, 00:12:53.220 "base_bdevs_list": [ 00:12:53.220 { 00:12:53.220 "name": "spare", 00:12:53.220 "uuid": "c452e8e3-8ee9-557c-a58d-39ebea8ed746", 00:12:53.220 "is_configured": true, 00:12:53.220 "data_offset": 0, 00:12:53.220 "data_size": 65536 00:12:53.220 }, 00:12:53.220 { 00:12:53.220 "name": null, 00:12:53.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.220 "is_configured": false, 00:12:53.220 "data_offset": 0, 00:12:53.220 "data_size": 65536 00:12:53.220 }, 00:12:53.220 { 00:12:53.220 "name": "BaseBdev3", 00:12:53.220 "uuid": "e9f2a968-d65a-5f96-bdee-9c7406032335", 00:12:53.220 "is_configured": true, 00:12:53.220 "data_offset": 0, 00:12:53.220 "data_size": 65536 00:12:53.220 }, 00:12:53.220 { 00:12:53.220 "name": "BaseBdev4", 00:12:53.220 "uuid": "911278db-78e7-59fe-9698-b9752e2b5aa7", 00:12:53.220 "is_configured": true, 00:12:53.220 "data_offset": 0, 00:12:53.220 "data_size": 
65536 00:12:53.220 } 00:12:53.220 ] 00:12:53.220 }' 00:12:53.220 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.220 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:53.220 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:53.220 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:53.220 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:53.220 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:53.220 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.220 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.220 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.220 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:53.220 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.220 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.220 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.221 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.221 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.221 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.221 12:56:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.221 12:56:10 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:12:53.221 12:56:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.480 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.480 "name": "raid_bdev1", 00:12:53.480 "uuid": "0db64872-079d-435b-b453-7b0604b1dd95", 00:12:53.480 "strip_size_kb": 0, 00:12:53.480 "state": "online", 00:12:53.480 "raid_level": "raid1", 00:12:53.480 "superblock": false, 00:12:53.480 "num_base_bdevs": 4, 00:12:53.480 "num_base_bdevs_discovered": 3, 00:12:53.480 "num_base_bdevs_operational": 3, 00:12:53.480 "base_bdevs_list": [ 00:12:53.480 { 00:12:53.480 "name": "spare", 00:12:53.480 "uuid": "c452e8e3-8ee9-557c-a58d-39ebea8ed746", 00:12:53.480 "is_configured": true, 00:12:53.480 "data_offset": 0, 00:12:53.480 "data_size": 65536 00:12:53.480 }, 00:12:53.480 { 00:12:53.480 "name": null, 00:12:53.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.480 "is_configured": false, 00:12:53.480 "data_offset": 0, 00:12:53.480 "data_size": 65536 00:12:53.480 }, 00:12:53.480 { 00:12:53.480 "name": "BaseBdev3", 00:12:53.480 "uuid": "e9f2a968-d65a-5f96-bdee-9c7406032335", 00:12:53.480 "is_configured": true, 00:12:53.480 "data_offset": 0, 00:12:53.480 "data_size": 65536 00:12:53.480 }, 00:12:53.480 { 00:12:53.480 "name": "BaseBdev4", 00:12:53.480 "uuid": "911278db-78e7-59fe-9698-b9752e2b5aa7", 00:12:53.480 "is_configured": true, 00:12:53.480 "data_offset": 0, 00:12:53.480 "data_size": 65536 00:12:53.480 } 00:12:53.480 ] 00:12:53.480 }' 00:12:53.480 12:56:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.480 12:56:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.740 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:53.740 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.740 
12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.740 [2024-11-26 12:56:11.298770] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:53.740 [2024-11-26 12:56:11.298858] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:53.740 00:12:53.740 Latency(us) 00:12:53.740 [2024-11-26T12:56:11.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.740 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:53.740 raid_bdev1 : 7.85 87.04 261.12 0.00 0.00 15355.43 273.66 116762.83 00:12:53.740 [2024-11-26T12:56:11.424Z] =================================================================================================================== 00:12:53.740 [2024-11-26T12:56:11.424Z] Total : 87.04 261.12 0.00 0.00 15355.43 273.66 116762.83 00:12:53.740 [2024-11-26 12:56:11.334002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.740 [2024-11-26 12:56:11.334074] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:53.740 [2024-11-26 12:56:11.334195] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:53.740 [2024-11-26 12:56:11.334241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:53.740 { 00:12:53.740 "results": [ 00:12:53.740 { 00:12:53.740 "job": "raid_bdev1", 00:12:53.740 "core_mask": "0x1", 00:12:53.740 "workload": "randrw", 00:12:53.740 "percentage": 50, 00:12:53.740 "status": "finished", 00:12:53.740 "queue_depth": 2, 00:12:53.740 "io_size": 3145728, 00:12:53.740 "runtime": 7.846955, 00:12:53.740 "iops": 87.0401321276852, 00:12:53.740 "mibps": 261.1203963830556, 00:12:53.740 "io_failed": 0, 00:12:53.740 "io_timeout": 0, 00:12:53.740 "avg_latency_us": 15355.431488360495, 00:12:53.740 
"min_latency_us": 273.6628820960699, 00:12:53.740 "max_latency_us": 116762.82969432314 00:12:53.740 } 00:12:53.740 ], 00:12:53.740 "core_count": 1 00:12:53.740 } 00:12:53.740 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.740 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.740 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:53.740 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.740 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.740 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.740 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:53.740 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:53.740 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:53.740 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:53.740 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:53.740 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:53.740 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:53.740 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:53.740 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:53.740 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:53.740 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:53.740 12:56:11 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:53.740 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:54.007 /dev/nbd0 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.007 1+0 records in 00:12:54.007 1+0 records out 00:12:54.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483213 s, 8.5 MB/s 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:54.007 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:12:54.268 /dev/nbd1 00:12:54.268 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:54.268 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:54.268 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:54.268 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:54.268 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:54.268 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:54.268 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:54.268 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:54.268 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:54.268 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:54.268 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.268 1+0 records in 00:12:54.268 1+0 records out 00:12:54.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039316 s, 10.4 MB/s 00:12:54.268 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.268 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:54.268 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.268 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:12:54.268 12:56:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:54.268 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.268 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:54.268 12:56:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:54.527 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:54.527 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:54.527 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:54.527 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:54.527 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:54.527 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:54.527 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:54.527 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:54.527 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:54.527 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:54.527 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:54.527 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.527 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:54.527 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:54.527 12:56:12 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@45 -- # return 0 00:12:54.527 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:54.527 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:12:54.527 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:12:54.527 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:54.527 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:12:54.527 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:54.787 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:54.787 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:54.787 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:54.787 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:54.787 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:54.787 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:12:54.787 /dev/nbd1 00:12:54.787 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:54.787 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:54.787 12:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:54.787 12:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:54.787 12:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:54.787 12:56:12 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:54.787 12:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:54.787 12:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:54.787 12:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:54.787 12:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:54.787 12:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.787 1+0 records in 00:12:54.787 1+0 records out 00:12:54.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453707 s, 9.0 MB/s 00:12:54.787 12:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.787 12:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:54.787 12:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.047 12:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:55.047 12:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:55.047 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:55.047 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:55.047 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:55.047 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:55.047 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:55.047 12:56:12 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:55.047 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:55.047 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:55.047 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.047 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:55.047 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:55.047 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:55.047 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:55.047 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.047 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.047 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:55.307 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:55.307 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.307 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:55.307 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:55.307 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:55.307 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:55.307 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:55.307 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.307 12:56:12 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:55.307 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:55.307 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:55.307 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:55.307 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.307 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.307 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:55.307 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:55.307 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.307 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:55.307 12:56:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 89544 00:12:55.307 12:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 89544 ']' 00:12:55.307 12:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 89544 00:12:55.307 12:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:12:55.307 12:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:55.307 12:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89544 00:12:55.567 12:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:55.567 12:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:55.567 12:56:12 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 89544' 00:12:55.567 killing process with pid 89544 00:12:55.567 Received shutdown signal, test time was about 9.515784 seconds 00:12:55.567 00:12:55.567 Latency(us) 00:12:55.567 [2024-11-26T12:56:13.251Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.567 [2024-11-26T12:56:13.251Z] =================================================================================================================== 00:12:55.567 [2024-11-26T12:56:13.251Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:55.567 12:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 89544 00:12:55.567 [2024-11-26 12:56:12.996229] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:55.567 12:56:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 89544 00:12:55.567 [2024-11-26 12:56:13.041081] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:55.828 00:12:55.828 real 0m11.569s 00:12:55.828 user 0m15.040s 00:12:55.828 sys 0m1.763s 00:12:55.828 ************************************ 00:12:55.828 END TEST raid_rebuild_test_io 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.828 ************************************ 00:12:55.828 12:56:13 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:12:55.828 12:56:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:55.828 12:56:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:55.828 12:56:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:55.828 ************************************ 00:12:55.828 START TEST 
raid_rebuild_test_sb_io 00:12:55.828 ************************************ 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 
00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89933 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89933 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 89933 ']' 00:12:55.828 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:55.828 12:56:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.828 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:55.828 Zero copy mechanism will not be used. 00:12:55.828 [2024-11-26 12:56:13.453251] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:55.828 [2024-11-26 12:56:13.453391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89933 ] 00:12:56.088 [2024-11-26 12:56:13.611563] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.088 [2024-11-26 12:56:13.657288] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.088 [2024-11-26 12:56:13.701275] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:56.088 [2024-11-26 12:56:13.701305] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:56.658 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:56.658 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:12:56.658 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:12:56.658 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:56.658 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.658 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.658 BaseBdev1_malloc 00:12:56.658 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.658 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:56.658 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.658 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.658 [2024-11-26 12:56:14.311667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:56.658 [2024-11-26 12:56:14.311725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.658 [2024-11-26 12:56:14.311750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:56.658 [2024-11-26 12:56:14.311763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.658 [2024-11-26 12:56:14.313787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.658 [2024-11-26 12:56:14.313832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:56.658 BaseBdev1 00:12:56.658 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.658 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:56.658 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:12:56.658 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.658 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.918 BaseBdev2_malloc 00:12:56.918 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.919 [2024-11-26 12:56:14.353858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:56.919 [2024-11-26 12:56:14.353955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.919 [2024-11-26 12:56:14.353996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:56.919 [2024-11-26 12:56:14.354016] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.919 [2024-11-26 12:56:14.358451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.919 [2024-11-26 12:56:14.358518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:56.919 BaseBdev2 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:56.919 BaseBdev3_malloc 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.919 [2024-11-26 12:56:14.384728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:56.919 [2024-11-26 12:56:14.384815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.919 [2024-11-26 12:56:14.384860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:56.919 [2024-11-26 12:56:14.384868] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.919 [2024-11-26 12:56:14.386900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.919 [2024-11-26 12:56:14.386933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:56.919 BaseBdev3 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.919 BaseBdev4_malloc 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.919 [2024-11-26 12:56:14.413378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:56.919 [2024-11-26 12:56:14.413428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.919 [2024-11-26 12:56:14.413449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:56.919 [2024-11-26 12:56:14.413457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.919 [2024-11-26 12:56:14.415364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.919 [2024-11-26 12:56:14.415445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:56.919 BaseBdev4 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.919 spare_malloc 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.919 spare_delay 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.919 [2024-11-26 12:56:14.453965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:56.919 [2024-11-26 12:56:14.454053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.919 [2024-11-26 12:56:14.454093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:56.919 [2024-11-26 12:56:14.454102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.919 [2024-11-26 12:56:14.456163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.919 [2024-11-26 12:56:14.456244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:56.919 spare 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.919 [2024-11-26 12:56:14.466020] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:56.919 [2024-11-26 
12:56:14.467790] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:56.919 [2024-11-26 12:56:14.467856] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:56.919 [2024-11-26 12:56:14.467897] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:56.919 [2024-11-26 12:56:14.468061] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:56.919 [2024-11-26 12:56:14.468077] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:56.919 [2024-11-26 12:56:14.468301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:56.919 [2024-11-26 12:56:14.468438] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:56.919 [2024-11-26 12:56:14.468450] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:56.919 [2024-11-26 12:56:14.468563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.919 "name": "raid_bdev1", 00:12:56.919 "uuid": "bfc817be-6228-431d-b9a6-02d5eb6d020b", 00:12:56.919 "strip_size_kb": 0, 00:12:56.919 "state": "online", 00:12:56.919 "raid_level": "raid1", 00:12:56.919 "superblock": true, 00:12:56.919 "num_base_bdevs": 4, 00:12:56.919 "num_base_bdevs_discovered": 4, 00:12:56.919 "num_base_bdevs_operational": 4, 00:12:56.919 "base_bdevs_list": [ 00:12:56.919 { 00:12:56.919 "name": "BaseBdev1", 00:12:56.919 "uuid": "0d236547-00fc-5c6c-9d35-5557a612ee3c", 00:12:56.919 "is_configured": true, 00:12:56.919 "data_offset": 2048, 00:12:56.919 "data_size": 63488 00:12:56.919 }, 00:12:56.919 { 00:12:56.919 "name": "BaseBdev2", 00:12:56.919 "uuid": "20dbade9-cb3f-52a7-bfd3-211d6eda7132", 00:12:56.919 "is_configured": true, 00:12:56.919 "data_offset": 2048, 00:12:56.919 "data_size": 63488 00:12:56.919 }, 00:12:56.919 { 00:12:56.919 "name": "BaseBdev3", 00:12:56.919 "uuid": "79bdbab8-404f-5c1d-b776-12cd0ba35de6", 
00:12:56.919 "is_configured": true, 00:12:56.919 "data_offset": 2048, 00:12:56.919 "data_size": 63488 00:12:56.919 }, 00:12:56.919 { 00:12:56.919 "name": "BaseBdev4", 00:12:56.919 "uuid": "77fe06ea-b1aa-5a26-b696-6cb8ed36e02d", 00:12:56.919 "is_configured": true, 00:12:56.919 "data_offset": 2048, 00:12:56.919 "data_size": 63488 00:12:56.919 } 00:12:56.919 ] 00:12:56.919 }' 00:12:56.919 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.920 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.490 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:57.490 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.490 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.490 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:57.490 [2024-11-26 12:56:14.897526] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.490 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.490 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:57.490 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:57.490 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.490 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.490 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.490 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.490 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- 
# data_offset=2048 00:12:57.490 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:57.490 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:57.490 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:57.490 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.490 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.490 [2024-11-26 12:56:14.993060] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:57.490 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.490 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:57.490 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.490 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.490 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:57.490 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:57.490 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:57.490 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.490 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.490 12:56:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.490 12:56:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.490 12:56:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.490 12:56:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.490 12:56:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.490 12:56:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.490 12:56:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.490 12:56:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.490 "name": "raid_bdev1", 00:12:57.490 "uuid": "bfc817be-6228-431d-b9a6-02d5eb6d020b", 00:12:57.490 "strip_size_kb": 0, 00:12:57.490 "state": "online", 00:12:57.490 "raid_level": "raid1", 00:12:57.490 "superblock": true, 00:12:57.490 "num_base_bdevs": 4, 00:12:57.490 "num_base_bdevs_discovered": 3, 00:12:57.490 "num_base_bdevs_operational": 3, 00:12:57.490 "base_bdevs_list": [ 00:12:57.490 { 00:12:57.490 "name": null, 00:12:57.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.490 "is_configured": false, 00:12:57.490 "data_offset": 0, 00:12:57.490 "data_size": 63488 00:12:57.490 }, 00:12:57.490 { 00:12:57.490 "name": "BaseBdev2", 00:12:57.490 "uuid": "20dbade9-cb3f-52a7-bfd3-211d6eda7132", 00:12:57.490 "is_configured": true, 00:12:57.490 "data_offset": 2048, 00:12:57.490 "data_size": 63488 00:12:57.490 }, 00:12:57.490 { 00:12:57.490 "name": "BaseBdev3", 00:12:57.491 "uuid": "79bdbab8-404f-5c1d-b776-12cd0ba35de6", 00:12:57.491 "is_configured": true, 00:12:57.491 "data_offset": 2048, 00:12:57.491 "data_size": 63488 00:12:57.491 }, 00:12:57.491 { 00:12:57.491 "name": "BaseBdev4", 00:12:57.491 "uuid": "77fe06ea-b1aa-5a26-b696-6cb8ed36e02d", 00:12:57.491 "is_configured": true, 00:12:57.491 "data_offset": 2048, 00:12:57.491 "data_size": 63488 00:12:57.491 } 00:12:57.491 ] 00:12:57.491 }' 00:12:57.491 12:56:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.491 12:56:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.491 [2024-11-26 12:56:15.078899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:57.491 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:57.491 Zero copy mechanism will not be used. 00:12:57.491 Running I/O for 60 seconds... 00:12:58.060 12:56:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:58.060 12:56:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.060 12:56:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.060 [2024-11-26 12:56:15.483481] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:58.060 12:56:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.060 12:56:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:58.060 [2024-11-26 12:56:15.529450] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:58.060 [2024-11-26 12:56:15.531523] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:58.060 [2024-11-26 12:56:15.646276] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:58.060 [2024-11-26 12:56:15.646717] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:58.320 [2024-11-26 12:56:15.862614] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:58.320 [2024-11-26 12:56:15.862833] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:58.579 175.00 IOPS, 525.00 MiB/s [2024-11-26T12:56:16.263Z] [2024-11-26 12:56:16.197337] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:58.579 [2024-11-26 12:56:16.197673] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:58.838 [2024-11-26 12:56:16.420270] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:58.838 [2024-11-26 12:56:16.420549] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:59.098 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:59.098 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.098 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:59.098 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:59.098 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.098 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.098 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.098 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.098 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.098 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.098 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:59.098 "name": "raid_bdev1", 00:12:59.098 "uuid": "bfc817be-6228-431d-b9a6-02d5eb6d020b", 00:12:59.098 "strip_size_kb": 0, 00:12:59.098 "state": "online", 00:12:59.098 "raid_level": "raid1", 00:12:59.098 "superblock": true, 00:12:59.098 "num_base_bdevs": 4, 00:12:59.098 "num_base_bdevs_discovered": 4, 00:12:59.098 "num_base_bdevs_operational": 4, 00:12:59.098 "process": { 00:12:59.098 "type": "rebuild", 00:12:59.098 "target": "spare", 00:12:59.098 "progress": { 00:12:59.098 "blocks": 10240, 00:12:59.098 "percent": 16 00:12:59.098 } 00:12:59.098 }, 00:12:59.098 "base_bdevs_list": [ 00:12:59.098 { 00:12:59.098 "name": "spare", 00:12:59.098 "uuid": "6fc977e6-69c1-5aec-b800-f299886e9700", 00:12:59.098 "is_configured": true, 00:12:59.098 "data_offset": 2048, 00:12:59.098 "data_size": 63488 00:12:59.098 }, 00:12:59.098 { 00:12:59.098 "name": "BaseBdev2", 00:12:59.098 "uuid": "20dbade9-cb3f-52a7-bfd3-211d6eda7132", 00:12:59.098 "is_configured": true, 00:12:59.098 "data_offset": 2048, 00:12:59.098 "data_size": 63488 00:12:59.098 }, 00:12:59.098 { 00:12:59.098 "name": "BaseBdev3", 00:12:59.098 "uuid": "79bdbab8-404f-5c1d-b776-12cd0ba35de6", 00:12:59.098 "is_configured": true, 00:12:59.098 "data_offset": 2048, 00:12:59.098 "data_size": 63488 00:12:59.098 }, 00:12:59.098 { 00:12:59.098 "name": "BaseBdev4", 00:12:59.098 "uuid": "77fe06ea-b1aa-5a26-b696-6cb8ed36e02d", 00:12:59.098 "is_configured": true, 00:12:59.098 "data_offset": 2048, 00:12:59.098 "data_size": 63488 00:12:59.098 } 00:12:59.098 ] 00:12:59.098 }' 00:12:59.098 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.098 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:59.098 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.098 [2024-11-26 12:56:16.658606] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:59.098 [2024-11-26 12:56:16.658980] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:59.098 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:59.098 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:59.098 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.098 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.098 [2024-11-26 12:56:16.669713] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:59.098 [2024-11-26 12:56:16.773808] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:59.358 [2024-11-26 12:56:16.880565] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:59.358 [2024-11-26 12:56:16.884674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.358 [2024-11-26 12:56:16.884715] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:59.358 [2024-11-26 12:56:16.884729] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:59.358 [2024-11-26 12:56:16.895249] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:59.358 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.358 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:59.358 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.358 
12:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.358 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.358 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.358 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:59.358 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.358 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.358 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.358 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.358 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.358 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.358 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.358 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.358 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.358 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.358 "name": "raid_bdev1", 00:12:59.358 "uuid": "bfc817be-6228-431d-b9a6-02d5eb6d020b", 00:12:59.358 "strip_size_kb": 0, 00:12:59.358 "state": "online", 00:12:59.358 "raid_level": "raid1", 00:12:59.358 "superblock": true, 00:12:59.358 "num_base_bdevs": 4, 00:12:59.358 "num_base_bdevs_discovered": 3, 00:12:59.358 "num_base_bdevs_operational": 3, 00:12:59.358 "base_bdevs_list": [ 00:12:59.358 { 00:12:59.358 "name": null, 00:12:59.358 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:59.358 "is_configured": false, 00:12:59.358 "data_offset": 0, 00:12:59.358 "data_size": 63488 00:12:59.358 }, 00:12:59.358 { 00:12:59.358 "name": "BaseBdev2", 00:12:59.358 "uuid": "20dbade9-cb3f-52a7-bfd3-211d6eda7132", 00:12:59.358 "is_configured": true, 00:12:59.358 "data_offset": 2048, 00:12:59.358 "data_size": 63488 00:12:59.358 }, 00:12:59.358 { 00:12:59.358 "name": "BaseBdev3", 00:12:59.358 "uuid": "79bdbab8-404f-5c1d-b776-12cd0ba35de6", 00:12:59.358 "is_configured": true, 00:12:59.358 "data_offset": 2048, 00:12:59.358 "data_size": 63488 00:12:59.358 }, 00:12:59.358 { 00:12:59.358 "name": "BaseBdev4", 00:12:59.358 "uuid": "77fe06ea-b1aa-5a26-b696-6cb8ed36e02d", 00:12:59.358 "is_configured": true, 00:12:59.358 "data_offset": 2048, 00:12:59.358 "data_size": 63488 00:12:59.358 } 00:12:59.358 ] 00:12:59.358 }' 00:12:59.358 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.358 12:56:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.885 160.00 IOPS, 480.00 MiB/s [2024-11-26T12:56:17.569Z] 12:56:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:59.885 12:56:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.885 12:56:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:59.885 12:56:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:59.885 12:56:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.885 12:56:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.885 12:56:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.885 12:56:17 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.885 12:56:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.885 12:56:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.885 12:56:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.885 "name": "raid_bdev1", 00:12:59.885 "uuid": "bfc817be-6228-431d-b9a6-02d5eb6d020b", 00:12:59.885 "strip_size_kb": 0, 00:12:59.885 "state": "online", 00:12:59.885 "raid_level": "raid1", 00:12:59.885 "superblock": true, 00:12:59.885 "num_base_bdevs": 4, 00:12:59.885 "num_base_bdevs_discovered": 3, 00:12:59.885 "num_base_bdevs_operational": 3, 00:12:59.885 "base_bdevs_list": [ 00:12:59.885 { 00:12:59.885 "name": null, 00:12:59.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.885 "is_configured": false, 00:12:59.885 "data_offset": 0, 00:12:59.886 "data_size": 63488 00:12:59.886 }, 00:12:59.886 { 00:12:59.886 "name": "BaseBdev2", 00:12:59.886 "uuid": "20dbade9-cb3f-52a7-bfd3-211d6eda7132", 00:12:59.886 "is_configured": true, 00:12:59.886 "data_offset": 2048, 00:12:59.886 "data_size": 63488 00:12:59.886 }, 00:12:59.886 { 00:12:59.886 "name": "BaseBdev3", 00:12:59.886 "uuid": "79bdbab8-404f-5c1d-b776-12cd0ba35de6", 00:12:59.886 "is_configured": true, 00:12:59.886 "data_offset": 2048, 00:12:59.886 "data_size": 63488 00:12:59.886 }, 00:12:59.886 { 00:12:59.886 "name": "BaseBdev4", 00:12:59.886 "uuid": "77fe06ea-b1aa-5a26-b696-6cb8ed36e02d", 00:12:59.886 "is_configured": true, 00:12:59.886 "data_offset": 2048, 00:12:59.886 "data_size": 63488 00:12:59.886 } 00:12:59.886 ] 00:12:59.886 }' 00:12:59.886 12:56:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.886 12:56:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:59.886 12:56:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:12:59.886 12:56:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:59.886 12:56:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:59.887 12:56:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.887 12:56:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.887 [2024-11-26 12:56:17.524426] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:59.887 12:56:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.887 12:56:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:00.150 [2024-11-26 12:56:17.571362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:00.150 [2024-11-26 12:56:17.573398] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:00.150 [2024-11-26 12:56:17.688254] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:00.150 [2024-11-26 12:56:17.688704] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:00.409 [2024-11-26 12:56:17.899148] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:00.409 [2024-11-26 12:56:17.899531] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:00.668 168.33 IOPS, 505.00 MiB/s [2024-11-26T12:56:18.352Z] [2024-11-26 12:56:18.278625] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:00.927 12:56:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:00.928 12:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.928 12:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:00.928 12:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:00.928 12:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.928 12:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.928 12:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.928 12:56:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.928 12:56:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.928 12:56:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.187 12:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.187 "name": "raid_bdev1", 00:13:01.187 "uuid": "bfc817be-6228-431d-b9a6-02d5eb6d020b", 00:13:01.187 "strip_size_kb": 0, 00:13:01.187 "state": "online", 00:13:01.187 "raid_level": "raid1", 00:13:01.187 "superblock": true, 00:13:01.187 "num_base_bdevs": 4, 00:13:01.187 "num_base_bdevs_discovered": 4, 00:13:01.187 "num_base_bdevs_operational": 4, 00:13:01.187 "process": { 00:13:01.187 "type": "rebuild", 00:13:01.187 "target": "spare", 00:13:01.187 "progress": { 00:13:01.187 "blocks": 12288, 00:13:01.187 "percent": 19 00:13:01.187 } 00:13:01.187 }, 00:13:01.187 "base_bdevs_list": [ 00:13:01.187 { 00:13:01.187 "name": "spare", 00:13:01.187 "uuid": "6fc977e6-69c1-5aec-b800-f299886e9700", 00:13:01.187 "is_configured": true, 00:13:01.187 "data_offset": 2048, 00:13:01.187 "data_size": 63488 
00:13:01.187 }, 00:13:01.187 { 00:13:01.187 "name": "BaseBdev2", 00:13:01.187 "uuid": "20dbade9-cb3f-52a7-bfd3-211d6eda7132", 00:13:01.187 "is_configured": true, 00:13:01.187 "data_offset": 2048, 00:13:01.187 "data_size": 63488 00:13:01.187 }, 00:13:01.187 { 00:13:01.187 "name": "BaseBdev3", 00:13:01.187 "uuid": "79bdbab8-404f-5c1d-b776-12cd0ba35de6", 00:13:01.187 "is_configured": true, 00:13:01.187 "data_offset": 2048, 00:13:01.187 "data_size": 63488 00:13:01.187 }, 00:13:01.187 { 00:13:01.187 "name": "BaseBdev4", 00:13:01.187 "uuid": "77fe06ea-b1aa-5a26-b696-6cb8ed36e02d", 00:13:01.187 "is_configured": true, 00:13:01.187 "data_offset": 2048, 00:13:01.187 "data_size": 63488 00:13:01.187 } 00:13:01.187 ] 00:13:01.187 }' 00:13:01.187 12:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.187 12:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:01.188 12:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.188 12:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:01.188 12:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:01.188 12:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:01.188 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:01.188 12:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:01.188 12:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:01.188 12:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:01.188 12:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:01.188 12:56:18 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.188 12:56:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.188 [2024-11-26 12:56:18.699791] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:01.448 [2024-11-26 12:56:18.988277] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:13:01.448 [2024-11-26 12:56:18.988398] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:13:01.448 12:56:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.448 12:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:01.448 12:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:01.448 12:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:01.448 12:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.448 12:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:01.448 12:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:01.448 12:56:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.448 12:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.448 12:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.448 12:56:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.448 12:56:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.448 12:56:19 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.448 12:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.448 "name": "raid_bdev1", 00:13:01.448 "uuid": "bfc817be-6228-431d-b9a6-02d5eb6d020b", 00:13:01.448 "strip_size_kb": 0, 00:13:01.448 "state": "online", 00:13:01.448 "raid_level": "raid1", 00:13:01.448 "superblock": true, 00:13:01.448 "num_base_bdevs": 4, 00:13:01.448 "num_base_bdevs_discovered": 3, 00:13:01.448 "num_base_bdevs_operational": 3, 00:13:01.448 "process": { 00:13:01.448 "type": "rebuild", 00:13:01.448 "target": "spare", 00:13:01.448 "progress": { 00:13:01.448 "blocks": 18432, 00:13:01.448 "percent": 29 00:13:01.448 } 00:13:01.448 }, 00:13:01.448 "base_bdevs_list": [ 00:13:01.448 { 00:13:01.448 "name": "spare", 00:13:01.448 "uuid": "6fc977e6-69c1-5aec-b800-f299886e9700", 00:13:01.448 "is_configured": true, 00:13:01.448 "data_offset": 2048, 00:13:01.448 "data_size": 63488 00:13:01.448 }, 00:13:01.448 { 00:13:01.448 "name": null, 00:13:01.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.448 "is_configured": false, 00:13:01.448 "data_offset": 0, 00:13:01.448 "data_size": 63488 00:13:01.448 }, 00:13:01.448 { 00:13:01.448 "name": "BaseBdev3", 00:13:01.448 "uuid": "79bdbab8-404f-5c1d-b776-12cd0ba35de6", 00:13:01.448 "is_configured": true, 00:13:01.448 "data_offset": 2048, 00:13:01.448 "data_size": 63488 00:13:01.448 }, 00:13:01.448 { 00:13:01.448 "name": "BaseBdev4", 00:13:01.448 "uuid": "77fe06ea-b1aa-5a26-b696-6cb8ed36e02d", 00:13:01.448 "is_configured": true, 00:13:01.448 "data_offset": 2048, 00:13:01.448 "data_size": 63488 00:13:01.448 } 00:13:01.448 ] 00:13:01.448 }' 00:13:01.448 12:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.448 141.25 IOPS, 423.75 MiB/s [2024-11-26T12:56:19.132Z] 12:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:01.448 12:56:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.448 [2024-11-26 12:56:19.121836] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:01.718 12:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:01.718 12:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=403 00:13:01.718 12:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:01.718 12:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:01.718 12:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.718 12:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:01.718 12:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:01.718 12:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.718 12:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.718 12:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.718 12:56:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.718 12:56:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.718 12:56:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.718 12:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.718 "name": "raid_bdev1", 00:13:01.718 "uuid": "bfc817be-6228-431d-b9a6-02d5eb6d020b", 00:13:01.718 "strip_size_kb": 0, 00:13:01.718 "state": "online", 
00:13:01.718 "raid_level": "raid1", 00:13:01.718 "superblock": true, 00:13:01.718 "num_base_bdevs": 4, 00:13:01.718 "num_base_bdevs_discovered": 3, 00:13:01.718 "num_base_bdevs_operational": 3, 00:13:01.718 "process": { 00:13:01.718 "type": "rebuild", 00:13:01.718 "target": "spare", 00:13:01.718 "progress": { 00:13:01.718 "blocks": 20480, 00:13:01.718 "percent": 32 00:13:01.718 } 00:13:01.718 }, 00:13:01.718 "base_bdevs_list": [ 00:13:01.718 { 00:13:01.718 "name": "spare", 00:13:01.718 "uuid": "6fc977e6-69c1-5aec-b800-f299886e9700", 00:13:01.718 "is_configured": true, 00:13:01.718 "data_offset": 2048, 00:13:01.718 "data_size": 63488 00:13:01.718 }, 00:13:01.718 { 00:13:01.718 "name": null, 00:13:01.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.718 "is_configured": false, 00:13:01.718 "data_offset": 0, 00:13:01.718 "data_size": 63488 00:13:01.718 }, 00:13:01.718 { 00:13:01.718 "name": "BaseBdev3", 00:13:01.718 "uuid": "79bdbab8-404f-5c1d-b776-12cd0ba35de6", 00:13:01.718 "is_configured": true, 00:13:01.718 "data_offset": 2048, 00:13:01.718 "data_size": 63488 00:13:01.718 }, 00:13:01.718 { 00:13:01.718 "name": "BaseBdev4", 00:13:01.718 "uuid": "77fe06ea-b1aa-5a26-b696-6cb8ed36e02d", 00:13:01.718 "is_configured": true, 00:13:01.718 "data_offset": 2048, 00:13:01.718 "data_size": 63488 00:13:01.718 } 00:13:01.718 ] 00:13:01.718 }' 00:13:01.718 12:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.719 [2024-11-26 12:56:19.244329] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:01.719 12:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:01.719 12:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.719 12:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:13:01.719 12:56:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:01.981 [2024-11-26 12:56:19.497751] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:02.549 [2024-11-26 12:56:19.943366] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:02.809 123.80 IOPS, 371.40 MiB/s [2024-11-26T12:56:20.493Z] 12:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:02.809 12:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.809 12:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.809 12:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.809 12:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.809 12:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.809 12:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.809 12:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.809 12:56:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.809 12:56:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.809 12:56:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.809 12:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.809 "name": "raid_bdev1", 00:13:02.809 "uuid": "bfc817be-6228-431d-b9a6-02d5eb6d020b", 00:13:02.809 "strip_size_kb": 0, 00:13:02.809 "state": "online", 
00:13:02.809 "raid_level": "raid1", 00:13:02.809 "superblock": true, 00:13:02.809 "num_base_bdevs": 4, 00:13:02.809 "num_base_bdevs_discovered": 3, 00:13:02.809 "num_base_bdevs_operational": 3, 00:13:02.809 "process": { 00:13:02.809 "type": "rebuild", 00:13:02.809 "target": "spare", 00:13:02.809 "progress": { 00:13:02.809 "blocks": 38912, 00:13:02.809 "percent": 61 00:13:02.809 } 00:13:02.809 }, 00:13:02.809 "base_bdevs_list": [ 00:13:02.809 { 00:13:02.809 "name": "spare", 00:13:02.809 "uuid": "6fc977e6-69c1-5aec-b800-f299886e9700", 00:13:02.809 "is_configured": true, 00:13:02.809 "data_offset": 2048, 00:13:02.809 "data_size": 63488 00:13:02.809 }, 00:13:02.809 { 00:13:02.809 "name": null, 00:13:02.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.809 "is_configured": false, 00:13:02.809 "data_offset": 0, 00:13:02.809 "data_size": 63488 00:13:02.809 }, 00:13:02.809 { 00:13:02.809 "name": "BaseBdev3", 00:13:02.809 "uuid": "79bdbab8-404f-5c1d-b776-12cd0ba35de6", 00:13:02.809 "is_configured": true, 00:13:02.809 "data_offset": 2048, 00:13:02.809 "data_size": 63488 00:13:02.809 }, 00:13:02.809 { 00:13:02.809 "name": "BaseBdev4", 00:13:02.809 "uuid": "77fe06ea-b1aa-5a26-b696-6cb8ed36e02d", 00:13:02.809 "is_configured": true, 00:13:02.809 "data_offset": 2048, 00:13:02.809 "data_size": 63488 00:13:02.809 } 00:13:02.809 ] 00:13:02.809 }' 00:13:02.809 12:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.809 12:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.809 12:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.809 12:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.809 12:56:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:03.379 [2024-11-26 12:56:20.938232] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:03.638 111.50 IOPS, 334.50 MiB/s [2024-11-26T12:56:21.322Z] [2024-11-26 12:56:21.266906] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:03.898 12:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:03.898 12:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:03.898 12:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.898 12:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:03.898 12:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:03.898 12:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.898 12:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.898 12:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.898 12:56:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.898 12:56:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.898 [2024-11-26 12:56:21.480188] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:03.898 12:56:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.898 12:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.898 "name": "raid_bdev1", 00:13:03.898 "uuid": "bfc817be-6228-431d-b9a6-02d5eb6d020b", 00:13:03.898 "strip_size_kb": 0, 00:13:03.898 "state": 
"online", 00:13:03.898 "raid_level": "raid1", 00:13:03.898 "superblock": true, 00:13:03.898 "num_base_bdevs": 4, 00:13:03.898 "num_base_bdevs_discovered": 3, 00:13:03.898 "num_base_bdevs_operational": 3, 00:13:03.898 "process": { 00:13:03.898 "type": "rebuild", 00:13:03.898 "target": "spare", 00:13:03.898 "progress": { 00:13:03.898 "blocks": 59392, 00:13:03.898 "percent": 93 00:13:03.898 } 00:13:03.898 }, 00:13:03.898 "base_bdevs_list": [ 00:13:03.898 { 00:13:03.898 "name": "spare", 00:13:03.898 "uuid": "6fc977e6-69c1-5aec-b800-f299886e9700", 00:13:03.898 "is_configured": true, 00:13:03.898 "data_offset": 2048, 00:13:03.898 "data_size": 63488 00:13:03.898 }, 00:13:03.898 { 00:13:03.898 "name": null, 00:13:03.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.898 "is_configured": false, 00:13:03.898 "data_offset": 0, 00:13:03.898 "data_size": 63488 00:13:03.898 }, 00:13:03.898 { 00:13:03.898 "name": "BaseBdev3", 00:13:03.898 "uuid": "79bdbab8-404f-5c1d-b776-12cd0ba35de6", 00:13:03.898 "is_configured": true, 00:13:03.898 "data_offset": 2048, 00:13:03.898 "data_size": 63488 00:13:03.898 }, 00:13:03.898 { 00:13:03.898 "name": "BaseBdev4", 00:13:03.898 "uuid": "77fe06ea-b1aa-5a26-b696-6cb8ed36e02d", 00:13:03.898 "is_configured": true, 00:13:03.898 "data_offset": 2048, 00:13:03.898 "data_size": 63488 00:13:03.898 } 00:13:03.898 ] 00:13:03.898 }' 00:13:03.898 12:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.898 12:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:04.158 12:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.158 12:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:04.158 12:56:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:04.158 [2024-11-26 12:56:21.808351] 
bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:04.417 [2024-11-26 12:56:21.913498] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:04.417 [2024-11-26 12:56:21.916778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.986 100.14 IOPS, 300.43 MiB/s [2024-11-26T12:56:22.670Z] 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:04.986 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:04.986 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.986 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:04.986 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:04.986 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.986 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.986 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.986 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.986 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.986 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.246 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.246 "name": "raid_bdev1", 00:13:05.246 "uuid": "bfc817be-6228-431d-b9a6-02d5eb6d020b", 00:13:05.246 "strip_size_kb": 0, 00:13:05.246 "state": "online", 00:13:05.246 "raid_level": "raid1", 00:13:05.246 "superblock": true, 00:13:05.246 
"num_base_bdevs": 4, 00:13:05.246 "num_base_bdevs_discovered": 3, 00:13:05.246 "num_base_bdevs_operational": 3, 00:13:05.246 "base_bdevs_list": [ 00:13:05.246 { 00:13:05.246 "name": "spare", 00:13:05.246 "uuid": "6fc977e6-69c1-5aec-b800-f299886e9700", 00:13:05.246 "is_configured": true, 00:13:05.246 "data_offset": 2048, 00:13:05.246 "data_size": 63488 00:13:05.246 }, 00:13:05.246 { 00:13:05.246 "name": null, 00:13:05.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.246 "is_configured": false, 00:13:05.246 "data_offset": 0, 00:13:05.246 "data_size": 63488 00:13:05.246 }, 00:13:05.246 { 00:13:05.246 "name": "BaseBdev3", 00:13:05.246 "uuid": "79bdbab8-404f-5c1d-b776-12cd0ba35de6", 00:13:05.246 "is_configured": true, 00:13:05.246 "data_offset": 2048, 00:13:05.246 "data_size": 63488 00:13:05.246 }, 00:13:05.246 { 00:13:05.246 "name": "BaseBdev4", 00:13:05.246 "uuid": "77fe06ea-b1aa-5a26-b696-6cb8ed36e02d", 00:13:05.246 "is_configured": true, 00:13:05.246 "data_offset": 2048, 00:13:05.246 "data_size": 63488 00:13:05.246 } 00:13:05.246 ] 00:13:05.246 }' 00:13:05.246 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.246 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:05.246 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.246 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:05.246 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:05.246 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:05.246 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.246 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 
00:13:05.246 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:05.246 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.246 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.246 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.246 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.246 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.246 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.246 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.246 "name": "raid_bdev1", 00:13:05.246 "uuid": "bfc817be-6228-431d-b9a6-02d5eb6d020b", 00:13:05.246 "strip_size_kb": 0, 00:13:05.246 "state": "online", 00:13:05.246 "raid_level": "raid1", 00:13:05.246 "superblock": true, 00:13:05.246 "num_base_bdevs": 4, 00:13:05.246 "num_base_bdevs_discovered": 3, 00:13:05.246 "num_base_bdevs_operational": 3, 00:13:05.246 "base_bdevs_list": [ 00:13:05.246 { 00:13:05.246 "name": "spare", 00:13:05.246 "uuid": "6fc977e6-69c1-5aec-b800-f299886e9700", 00:13:05.246 "is_configured": true, 00:13:05.246 "data_offset": 2048, 00:13:05.246 "data_size": 63488 00:13:05.246 }, 00:13:05.246 { 00:13:05.246 "name": null, 00:13:05.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.246 "is_configured": false, 00:13:05.246 "data_offset": 0, 00:13:05.246 "data_size": 63488 00:13:05.246 }, 00:13:05.246 { 00:13:05.246 "name": "BaseBdev3", 00:13:05.246 "uuid": "79bdbab8-404f-5c1d-b776-12cd0ba35de6", 00:13:05.246 "is_configured": true, 00:13:05.246 "data_offset": 2048, 00:13:05.246 "data_size": 63488 00:13:05.246 }, 00:13:05.246 { 00:13:05.246 "name": "BaseBdev4", 
00:13:05.246 "uuid": "77fe06ea-b1aa-5a26-b696-6cb8ed36e02d", 00:13:05.246 "is_configured": true, 00:13:05.246 "data_offset": 2048, 00:13:05.246 "data_size": 63488 00:13:05.246 } 00:13:05.246 ] 00:13:05.246 }' 00:13:05.247 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.247 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:05.247 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.247 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:05.247 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:05.247 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.247 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.247 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.247 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.247 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:05.247 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.247 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.247 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.247 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.247 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.247 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.247 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.247 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.507 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.507 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.507 "name": "raid_bdev1", 00:13:05.507 "uuid": "bfc817be-6228-431d-b9a6-02d5eb6d020b", 00:13:05.507 "strip_size_kb": 0, 00:13:05.507 "state": "online", 00:13:05.507 "raid_level": "raid1", 00:13:05.507 "superblock": true, 00:13:05.507 "num_base_bdevs": 4, 00:13:05.507 "num_base_bdevs_discovered": 3, 00:13:05.507 "num_base_bdevs_operational": 3, 00:13:05.507 "base_bdevs_list": [ 00:13:05.507 { 00:13:05.507 "name": "spare", 00:13:05.507 "uuid": "6fc977e6-69c1-5aec-b800-f299886e9700", 00:13:05.507 "is_configured": true, 00:13:05.507 "data_offset": 2048, 00:13:05.507 "data_size": 63488 00:13:05.507 }, 00:13:05.507 { 00:13:05.507 "name": null, 00:13:05.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.507 "is_configured": false, 00:13:05.507 "data_offset": 0, 00:13:05.507 "data_size": 63488 00:13:05.507 }, 00:13:05.507 { 00:13:05.507 "name": "BaseBdev3", 00:13:05.507 "uuid": "79bdbab8-404f-5c1d-b776-12cd0ba35de6", 00:13:05.507 "is_configured": true, 00:13:05.507 "data_offset": 2048, 00:13:05.507 "data_size": 63488 00:13:05.507 }, 00:13:05.507 { 00:13:05.507 "name": "BaseBdev4", 00:13:05.507 "uuid": "77fe06ea-b1aa-5a26-b696-6cb8ed36e02d", 00:13:05.507 "is_configured": true, 00:13:05.507 "data_offset": 2048, 00:13:05.507 "data_size": 63488 00:13:05.507 } 00:13:05.507 ] 00:13:05.507 }' 00:13:05.507 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.507 12:56:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 
-- # set +x 00:13:05.766 92.12 IOPS, 276.38 MiB/s [2024-11-26T12:56:23.450Z] 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:05.766 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.766 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.766 [2024-11-26 12:56:23.367426] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:05.766 [2024-11-26 12:56:23.367517] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:05.766 00:13:05.766 Latency(us) 00:13:05.766 [2024-11-26T12:56:23.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:05.766 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:05.766 raid_bdev1 : 8.35 89.58 268.73 0.00 0.00 16162.46 273.66 114931.26 00:13:05.766 [2024-11-26T12:56:23.450Z] =================================================================================================================== 00:13:05.766 [2024-11-26T12:56:23.450Z] Total : 89.58 268.73 0.00 0.00 16162.46 273.66 114931.26 00:13:05.766 { 00:13:05.766 "results": [ 00:13:05.766 { 00:13:05.766 "job": "raid_bdev1", 00:13:05.766 "core_mask": "0x1", 00:13:05.766 "workload": "randrw", 00:13:05.766 "percentage": 50, 00:13:05.766 "status": "finished", 00:13:05.766 "queue_depth": 2, 00:13:05.766 "io_size": 3145728, 00:13:05.766 "runtime": 8.350326, 00:13:05.766 "iops": 89.5773410523134, 00:13:05.766 "mibps": 268.7320231569402, 00:13:05.766 "io_failed": 0, 00:13:05.766 "io_timeout": 0, 00:13:05.766 "avg_latency_us": 16162.455353431567, 00:13:05.766 "min_latency_us": 273.6628820960699, 00:13:05.766 "max_latency_us": 114931.2558951965 00:13:05.766 } 00:13:05.766 ], 00:13:05.766 "core_count": 1 00:13:05.766 } 00:13:05.766 [2024-11-26 12:56:23.418379] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.766 [2024-11-26 12:56:23.418427] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:05.766 [2024-11-26 12:56:23.418529] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:05.766 [2024-11-26 12:56:23.418544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:05.766 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.766 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:05.766 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.766 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.766 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.766 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.026 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:06.026 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:06.026 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:06.026 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:06.026 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:06.026 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:06.026 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:06.026 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 
00:13:06.026 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:06.026 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:06.026 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:06.026 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:06.026 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:06.026 /dev/nbd0 00:13:06.026 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:06.026 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:06.026 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:06.026 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:06.026 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:06.026 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:06.026 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:06.026 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:06.026 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:06.026 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:06.026 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.285 1+0 records in 00:13:06.286 1+0 records out 00:13:06.286 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000586257 s, 7.0 MB/s 
00:13:06.286 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.286 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:06.286 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.286 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:06.286 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:06.286 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.286 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:06.286 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:06.286 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:06.286 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:06.286 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:06.286 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:06.286 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:06.286 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:06.286 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:06.286 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:06.286 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:06.286 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:13:06.286 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:06.286 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:06.286 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:06.286 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:06.286 /dev/nbd1 00:13:06.286 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:06.286 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:06.286 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:06.286 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:06.286 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:06.286 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:06.286 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:06.545 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:06.545 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:06.545 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:06.545 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.545 1+0 records in 00:13:06.545 1+0 records out 00:13:06.545 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115565 s, 3.5 MB/s 00:13:06.545 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.545 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:06.545 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.545 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:06.545 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:06.545 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.545 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:06.545 12:56:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:06.545 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:06.545 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:06.545 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:06.545 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:06.545 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:06.545 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.545 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:06.805 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:06.805 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:06.805 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:06.805 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.805 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.805 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:06.805 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:06.805 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.805 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:06.805 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:06.805 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:06.805 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:06.805 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:06.805 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:06.805 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:06.805 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:06.805 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:06.805 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:06.805 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:06.805 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:07.065 /dev/nbd1 00:13:07.065 12:56:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:07.065 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:07.065 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:07.065 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:07.065 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:07.065 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:07.065 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:07.065 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:07.065 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:07.065 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:07.065 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:07.065 1+0 records in 00:13:07.065 1+0 records out 00:13:07.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416541 s, 9.8 MB/s 00:13:07.065 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.065 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:07.065 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.065 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:07.065 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 
00:13:07.065 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:07.065 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:07.065 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:07.065 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:07.065 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:07.065 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:07.065 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:07.065 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:07.065 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.065 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:07.324 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:07.324 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:07.325 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:07.325 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.325 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.325 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:07.325 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:07.325 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.325 
12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:07.325 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:07.325 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:07.325 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:07.325 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:07.325 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.325 12:56:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.585 [2024-11-26 12:56:25.085018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:07.585 [2024-11-26 12:56:25.085137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.585 [2024-11-26 12:56:25.085160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:07.585 [2024-11-26 12:56:25.085170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.585 [2024-11-26 12:56:25.087396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.585 [2024-11-26 12:56:25.087466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:07.585 [2024-11-26 12:56:25.087590] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:07.585 [2024-11-26 12:56:25.087673] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:07.585 [2024-11-26 12:56:25.087832] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:07.585 [2024-11-26 12:56:25.087981] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:07.585 spare 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # 
rpc_cmd bdev_wait_for_examine 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.585 [2024-11-26 12:56:25.187902] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:13:07.585 [2024-11-26 12:56:25.187965] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:07.585 [2024-11-26 12:56:25.188250] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000036fc0 00:13:07.585 [2024-11-26 12:56:25.188387] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:13:07.585 [2024-11-26 12:56:25.188398] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:13:07.585 [2024-11-26 12:56:25.188520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.585 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.585 "name": "raid_bdev1", 00:13:07.585 "uuid": "bfc817be-6228-431d-b9a6-02d5eb6d020b", 00:13:07.585 "strip_size_kb": 0, 00:13:07.585 "state": "online", 00:13:07.585 "raid_level": "raid1", 00:13:07.585 "superblock": true, 00:13:07.585 "num_base_bdevs": 4, 00:13:07.585 "num_base_bdevs_discovered": 3, 00:13:07.585 "num_base_bdevs_operational": 3, 00:13:07.585 "base_bdevs_list": [ 00:13:07.585 { 00:13:07.585 "name": "spare", 00:13:07.585 "uuid": "6fc977e6-69c1-5aec-b800-f299886e9700", 00:13:07.585 "is_configured": true, 00:13:07.585 "data_offset": 2048, 00:13:07.585 "data_size": 63488 00:13:07.585 }, 00:13:07.585 { 00:13:07.585 "name": null, 00:13:07.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.585 "is_configured": false, 00:13:07.585 "data_offset": 2048, 00:13:07.585 "data_size": 63488 00:13:07.585 }, 00:13:07.585 { 00:13:07.585 "name": "BaseBdev3", 00:13:07.585 "uuid": "79bdbab8-404f-5c1d-b776-12cd0ba35de6", 00:13:07.585 "is_configured": true, 00:13:07.585 "data_offset": 2048, 00:13:07.585 "data_size": 63488 00:13:07.585 }, 
00:13:07.585 { 00:13:07.585 "name": "BaseBdev4", 00:13:07.585 "uuid": "77fe06ea-b1aa-5a26-b696-6cb8ed36e02d", 00:13:07.585 "is_configured": true, 00:13:07.585 "data_offset": 2048, 00:13:07.586 "data_size": 63488 00:13:07.586 } 00:13:07.586 ] 00:13:07.586 }' 00:13:07.586 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.586 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.152 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:08.152 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.152 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:08.152 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:08.152 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.152 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.152 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.152 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.152 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.152 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.152 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.152 "name": "raid_bdev1", 00:13:08.152 "uuid": "bfc817be-6228-431d-b9a6-02d5eb6d020b", 00:13:08.152 "strip_size_kb": 0, 00:13:08.152 "state": "online", 00:13:08.152 "raid_level": "raid1", 00:13:08.152 "superblock": true, 00:13:08.152 "num_base_bdevs": 4, 00:13:08.152 
"num_base_bdevs_discovered": 3, 00:13:08.152 "num_base_bdevs_operational": 3, 00:13:08.152 "base_bdevs_list": [ 00:13:08.152 { 00:13:08.152 "name": "spare", 00:13:08.152 "uuid": "6fc977e6-69c1-5aec-b800-f299886e9700", 00:13:08.152 "is_configured": true, 00:13:08.152 "data_offset": 2048, 00:13:08.152 "data_size": 63488 00:13:08.152 }, 00:13:08.152 { 00:13:08.152 "name": null, 00:13:08.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.152 "is_configured": false, 00:13:08.152 "data_offset": 2048, 00:13:08.152 "data_size": 63488 00:13:08.152 }, 00:13:08.152 { 00:13:08.152 "name": "BaseBdev3", 00:13:08.152 "uuid": "79bdbab8-404f-5c1d-b776-12cd0ba35de6", 00:13:08.152 "is_configured": true, 00:13:08.152 "data_offset": 2048, 00:13:08.152 "data_size": 63488 00:13:08.152 }, 00:13:08.152 { 00:13:08.152 "name": "BaseBdev4", 00:13:08.152 "uuid": "77fe06ea-b1aa-5a26-b696-6cb8ed36e02d", 00:13:08.152 "is_configured": true, 00:13:08.152 "data_offset": 2048, 00:13:08.152 "data_size": 63488 00:13:08.152 } 00:13:08.152 ] 00:13:08.152 }' 00:13:08.152 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.152 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:08.152 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.152 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:08.152 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.152 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:08.152 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.152 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.152 12:56:25 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.411 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:08.411 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:08.411 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.411 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.411 [2024-11-26 12:56:25.843796] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:08.411 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.411 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:08.411 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.411 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.411 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.411 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.411 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:08.411 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.411 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.411 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.411 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.411 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:08.411 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.411 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.411 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.411 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.411 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.411 "name": "raid_bdev1", 00:13:08.411 "uuid": "bfc817be-6228-431d-b9a6-02d5eb6d020b", 00:13:08.411 "strip_size_kb": 0, 00:13:08.411 "state": "online", 00:13:08.411 "raid_level": "raid1", 00:13:08.411 "superblock": true, 00:13:08.411 "num_base_bdevs": 4, 00:13:08.411 "num_base_bdevs_discovered": 2, 00:13:08.412 "num_base_bdevs_operational": 2, 00:13:08.412 "base_bdevs_list": [ 00:13:08.412 { 00:13:08.412 "name": null, 00:13:08.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.412 "is_configured": false, 00:13:08.412 "data_offset": 0, 00:13:08.412 "data_size": 63488 00:13:08.412 }, 00:13:08.412 { 00:13:08.412 "name": null, 00:13:08.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.412 "is_configured": false, 00:13:08.412 "data_offset": 2048, 00:13:08.412 "data_size": 63488 00:13:08.412 }, 00:13:08.412 { 00:13:08.412 "name": "BaseBdev3", 00:13:08.412 "uuid": "79bdbab8-404f-5c1d-b776-12cd0ba35de6", 00:13:08.412 "is_configured": true, 00:13:08.412 "data_offset": 2048, 00:13:08.412 "data_size": 63488 00:13:08.412 }, 00:13:08.412 { 00:13:08.412 "name": "BaseBdev4", 00:13:08.412 "uuid": "77fe06ea-b1aa-5a26-b696-6cb8ed36e02d", 00:13:08.412 "is_configured": true, 00:13:08.412 "data_offset": 2048, 00:13:08.412 "data_size": 63488 00:13:08.412 } 00:13:08.412 ] 00:13:08.412 }' 00:13:08.412 12:56:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.412 12:56:25 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.671 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:08.671 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.671 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.671 [2024-11-26 12:56:26.331068] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:08.671 [2024-11-26 12:56:26.331281] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:08.671 [2024-11-26 12:56:26.331302] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:08.671 [2024-11-26 12:56:26.331334] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:08.671 [2024-11-26 12:56:26.334912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037090 00:13:08.671 [2024-11-26 12:56:26.336850] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:08.671 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.671 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.101 "name": "raid_bdev1", 00:13:10.101 "uuid": "bfc817be-6228-431d-b9a6-02d5eb6d020b", 00:13:10.101 "strip_size_kb": 0, 00:13:10.101 "state": "online", 00:13:10.101 "raid_level": "raid1", 00:13:10.101 "superblock": true, 00:13:10.101 "num_base_bdevs": 4, 00:13:10.101 "num_base_bdevs_discovered": 3, 00:13:10.101 "num_base_bdevs_operational": 3, 00:13:10.101 "process": { 00:13:10.101 "type": "rebuild", 00:13:10.101 "target": "spare", 00:13:10.101 "progress": { 00:13:10.101 "blocks": 20480, 00:13:10.101 "percent": 32 00:13:10.101 } 00:13:10.101 }, 00:13:10.101 "base_bdevs_list": [ 00:13:10.101 { 00:13:10.101 "name": "spare", 00:13:10.101 "uuid": "6fc977e6-69c1-5aec-b800-f299886e9700", 00:13:10.101 "is_configured": true, 00:13:10.101 "data_offset": 2048, 00:13:10.101 "data_size": 63488 00:13:10.101 }, 00:13:10.101 { 00:13:10.101 "name": null, 00:13:10.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.101 "is_configured": false, 00:13:10.101 "data_offset": 2048, 00:13:10.101 "data_size": 63488 00:13:10.101 }, 00:13:10.101 { 00:13:10.101 "name": "BaseBdev3", 00:13:10.101 "uuid": "79bdbab8-404f-5c1d-b776-12cd0ba35de6", 00:13:10.101 "is_configured": true, 00:13:10.101 "data_offset": 2048, 00:13:10.101 "data_size": 63488 00:13:10.101 }, 00:13:10.101 { 
00:13:10.101 "name": "BaseBdev4", 00:13:10.101 "uuid": "77fe06ea-b1aa-5a26-b696-6cb8ed36e02d", 00:13:10.101 "is_configured": true, 00:13:10.101 "data_offset": 2048, 00:13:10.101 "data_size": 63488 00:13:10.101 } 00:13:10.101 ] 00:13:10.101 }' 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.101 [2024-11-26 12:56:27.475712] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:10.101 [2024-11-26 12:56:27.540877] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:10.101 [2024-11-26 12:56:27.540944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.101 [2024-11-26 12:56:27.540960] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:10.101 [2024-11-26 12:56:27.540969] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.101 "name": "raid_bdev1", 00:13:10.101 "uuid": "bfc817be-6228-431d-b9a6-02d5eb6d020b", 00:13:10.101 "strip_size_kb": 0, 00:13:10.101 "state": "online", 00:13:10.101 "raid_level": "raid1", 00:13:10.101 "superblock": true, 00:13:10.101 "num_base_bdevs": 4, 00:13:10.101 "num_base_bdevs_discovered": 2, 00:13:10.101 "num_base_bdevs_operational": 2, 00:13:10.101 "base_bdevs_list": [ 00:13:10.101 { 00:13:10.101 
"name": null, 00:13:10.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.101 "is_configured": false, 00:13:10.101 "data_offset": 0, 00:13:10.101 "data_size": 63488 00:13:10.101 }, 00:13:10.101 { 00:13:10.101 "name": null, 00:13:10.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.101 "is_configured": false, 00:13:10.101 "data_offset": 2048, 00:13:10.101 "data_size": 63488 00:13:10.101 }, 00:13:10.101 { 00:13:10.101 "name": "BaseBdev3", 00:13:10.101 "uuid": "79bdbab8-404f-5c1d-b776-12cd0ba35de6", 00:13:10.101 "is_configured": true, 00:13:10.101 "data_offset": 2048, 00:13:10.101 "data_size": 63488 00:13:10.101 }, 00:13:10.101 { 00:13:10.101 "name": "BaseBdev4", 00:13:10.101 "uuid": "77fe06ea-b1aa-5a26-b696-6cb8ed36e02d", 00:13:10.101 "is_configured": true, 00:13:10.101 "data_offset": 2048, 00:13:10.101 "data_size": 63488 00:13:10.101 } 00:13:10.101 ] 00:13:10.101 }' 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.101 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.360 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:10.360 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.360 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.360 [2024-11-26 12:56:28.019925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:10.360 [2024-11-26 12:56:28.020038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.360 [2024-11-26 12:56:28.020080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:10.360 [2024-11-26 12:56:28.020112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.360 [2024-11-26 12:56:28.020553] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.360 [2024-11-26 12:56:28.020615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:10.360 [2024-11-26 12:56:28.020722] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:10.360 [2024-11-26 12:56:28.020763] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:10.360 [2024-11-26 12:56:28.020801] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:10.360 [2024-11-26 12:56:28.020855] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:10.360 [2024-11-26 12:56:28.023940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:13:10.360 spare 00:13:10.360 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.360 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:10.360 [2024-11-26 12:56:28.025849] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.741 "name": "raid_bdev1", 00:13:11.741 "uuid": "bfc817be-6228-431d-b9a6-02d5eb6d020b", 00:13:11.741 "strip_size_kb": 0, 00:13:11.741 "state": "online", 00:13:11.741 "raid_level": "raid1", 00:13:11.741 "superblock": true, 00:13:11.741 "num_base_bdevs": 4, 00:13:11.741 "num_base_bdevs_discovered": 3, 00:13:11.741 "num_base_bdevs_operational": 3, 00:13:11.741 "process": { 00:13:11.741 "type": "rebuild", 00:13:11.741 "target": "spare", 00:13:11.741 "progress": { 00:13:11.741 "blocks": 20480, 00:13:11.741 "percent": 32 00:13:11.741 } 00:13:11.741 }, 00:13:11.741 "base_bdevs_list": [ 00:13:11.741 { 00:13:11.741 "name": "spare", 00:13:11.741 "uuid": "6fc977e6-69c1-5aec-b800-f299886e9700", 00:13:11.741 "is_configured": true, 00:13:11.741 "data_offset": 2048, 00:13:11.741 "data_size": 63488 00:13:11.741 }, 00:13:11.741 { 00:13:11.741 "name": null, 00:13:11.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.741 "is_configured": false, 00:13:11.741 "data_offset": 2048, 00:13:11.741 "data_size": 63488 00:13:11.741 }, 00:13:11.741 { 00:13:11.741 "name": "BaseBdev3", 00:13:11.741 "uuid": "79bdbab8-404f-5c1d-b776-12cd0ba35de6", 00:13:11.741 "is_configured": true, 00:13:11.741 "data_offset": 2048, 00:13:11.741 "data_size": 63488 00:13:11.741 }, 00:13:11.741 { 00:13:11.741 "name": "BaseBdev4", 00:13:11.741 "uuid": "77fe06ea-b1aa-5a26-b696-6cb8ed36e02d", 00:13:11.741 "is_configured": true, 00:13:11.741 "data_offset": 2048, 00:13:11.741 "data_size": 63488 00:13:11.741 } 00:13:11.741 
] 00:13:11.741 }' 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.741 [2024-11-26 12:56:29.187732] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:11.741 [2024-11-26 12:56:29.229806] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:11.741 [2024-11-26 12:56:29.229858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.741 [2024-11-26 12:56:29.229876] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:11.741 [2024-11-26 12:56:29.229883] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.741 "name": "raid_bdev1", 00:13:11.741 "uuid": "bfc817be-6228-431d-b9a6-02d5eb6d020b", 00:13:11.741 "strip_size_kb": 0, 00:13:11.741 "state": "online", 00:13:11.741 "raid_level": "raid1", 00:13:11.741 "superblock": true, 00:13:11.741 "num_base_bdevs": 4, 00:13:11.741 "num_base_bdevs_discovered": 2, 00:13:11.741 "num_base_bdevs_operational": 2, 00:13:11.741 "base_bdevs_list": [ 00:13:11.741 { 00:13:11.741 "name": null, 00:13:11.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.741 "is_configured": false, 00:13:11.741 "data_offset": 0, 00:13:11.741 "data_size": 63488 00:13:11.741 }, 00:13:11.741 { 
00:13:11.741 "name": null, 00:13:11.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.741 "is_configured": false, 00:13:11.741 "data_offset": 2048, 00:13:11.741 "data_size": 63488 00:13:11.741 }, 00:13:11.741 { 00:13:11.741 "name": "BaseBdev3", 00:13:11.741 "uuid": "79bdbab8-404f-5c1d-b776-12cd0ba35de6", 00:13:11.741 "is_configured": true, 00:13:11.741 "data_offset": 2048, 00:13:11.741 "data_size": 63488 00:13:11.741 }, 00:13:11.741 { 00:13:11.741 "name": "BaseBdev4", 00:13:11.741 "uuid": "77fe06ea-b1aa-5a26-b696-6cb8ed36e02d", 00:13:11.741 "is_configured": true, 00:13:11.741 "data_offset": 2048, 00:13:11.741 "data_size": 63488 00:13:11.741 } 00:13:11.741 ] 00:13:11.741 }' 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.741 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.310 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:12.310 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.310 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:12.310 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:12.310 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.310 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.310 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.310 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.310 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.310 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.310 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.310 "name": "raid_bdev1", 00:13:12.310 "uuid": "bfc817be-6228-431d-b9a6-02d5eb6d020b", 00:13:12.310 "strip_size_kb": 0, 00:13:12.310 "state": "online", 00:13:12.310 "raid_level": "raid1", 00:13:12.310 "superblock": true, 00:13:12.310 "num_base_bdevs": 4, 00:13:12.310 "num_base_bdevs_discovered": 2, 00:13:12.310 "num_base_bdevs_operational": 2, 00:13:12.310 "base_bdevs_list": [ 00:13:12.310 { 00:13:12.310 "name": null, 00:13:12.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.310 "is_configured": false, 00:13:12.310 "data_offset": 0, 00:13:12.310 "data_size": 63488 00:13:12.310 }, 00:13:12.310 { 00:13:12.310 "name": null, 00:13:12.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.310 "is_configured": false, 00:13:12.310 "data_offset": 2048, 00:13:12.310 "data_size": 63488 00:13:12.310 }, 00:13:12.310 { 00:13:12.310 "name": "BaseBdev3", 00:13:12.310 "uuid": "79bdbab8-404f-5c1d-b776-12cd0ba35de6", 00:13:12.310 "is_configured": true, 00:13:12.310 "data_offset": 2048, 00:13:12.310 "data_size": 63488 00:13:12.310 }, 00:13:12.310 { 00:13:12.310 "name": "BaseBdev4", 00:13:12.310 "uuid": "77fe06ea-b1aa-5a26-b696-6cb8ed36e02d", 00:13:12.310 "is_configured": true, 00:13:12.310 "data_offset": 2048, 00:13:12.310 "data_size": 63488 00:13:12.310 } 00:13:12.310 ] 00:13:12.310 }' 00:13:12.310 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.310 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:12.310 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.310 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:12.310 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:12.310 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.310 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.310 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.310 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:12.310 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.310 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.310 [2024-11-26 12:56:29.864535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:12.310 [2024-11-26 12:56:29.864581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.310 [2024-11-26 12:56:29.864600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:13:12.310 [2024-11-26 12:56:29.864608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.310 [2024-11-26 12:56:29.864977] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.310 [2024-11-26 12:56:29.864994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:12.310 [2024-11-26 12:56:29.865059] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:12.310 [2024-11-26 12:56:29.865082] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:12.310 [2024-11-26 12:56:29.865092] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:12.310 [2024-11-26 12:56:29.865101] 
bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:12.310 BaseBdev1 00:13:12.310 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.310 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:13.249 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:13.249 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.249 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.249 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.249 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.249 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:13.249 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.249 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.249 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.249 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.249 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.249 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.249 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.249 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.249 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.249 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.249 "name": "raid_bdev1", 00:13:13.249 "uuid": "bfc817be-6228-431d-b9a6-02d5eb6d020b", 00:13:13.249 "strip_size_kb": 0, 00:13:13.249 "state": "online", 00:13:13.249 "raid_level": "raid1", 00:13:13.249 "superblock": true, 00:13:13.249 "num_base_bdevs": 4, 00:13:13.249 "num_base_bdevs_discovered": 2, 00:13:13.249 "num_base_bdevs_operational": 2, 00:13:13.249 "base_bdevs_list": [ 00:13:13.249 { 00:13:13.249 "name": null, 00:13:13.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.249 "is_configured": false, 00:13:13.249 "data_offset": 0, 00:13:13.249 "data_size": 63488 00:13:13.249 }, 00:13:13.249 { 00:13:13.249 "name": null, 00:13:13.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.249 "is_configured": false, 00:13:13.249 "data_offset": 2048, 00:13:13.249 "data_size": 63488 00:13:13.249 }, 00:13:13.249 { 00:13:13.249 "name": "BaseBdev3", 00:13:13.249 "uuid": "79bdbab8-404f-5c1d-b776-12cd0ba35de6", 00:13:13.249 "is_configured": true, 00:13:13.249 "data_offset": 2048, 00:13:13.249 "data_size": 63488 00:13:13.249 }, 00:13:13.249 { 00:13:13.249 "name": "BaseBdev4", 00:13:13.249 "uuid": "77fe06ea-b1aa-5a26-b696-6cb8ed36e02d", 00:13:13.249 "is_configured": true, 00:13:13.249 "data_offset": 2048, 00:13:13.250 "data_size": 63488 00:13:13.250 } 00:13:13.250 ] 00:13:13.250 }' 00:13:13.250 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.250 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 
-- # local process_type=none 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.817 "name": "raid_bdev1", 00:13:13.817 "uuid": "bfc817be-6228-431d-b9a6-02d5eb6d020b", 00:13:13.817 "strip_size_kb": 0, 00:13:13.817 "state": "online", 00:13:13.817 "raid_level": "raid1", 00:13:13.817 "superblock": true, 00:13:13.817 "num_base_bdevs": 4, 00:13:13.817 "num_base_bdevs_discovered": 2, 00:13:13.817 "num_base_bdevs_operational": 2, 00:13:13.817 "base_bdevs_list": [ 00:13:13.817 { 00:13:13.817 "name": null, 00:13:13.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.817 "is_configured": false, 00:13:13.817 "data_offset": 0, 00:13:13.817 "data_size": 63488 00:13:13.817 }, 00:13:13.817 { 00:13:13.817 "name": null, 00:13:13.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.817 "is_configured": false, 00:13:13.817 "data_offset": 2048, 00:13:13.817 "data_size": 63488 00:13:13.817 }, 00:13:13.817 { 00:13:13.817 "name": "BaseBdev3", 00:13:13.817 "uuid": "79bdbab8-404f-5c1d-b776-12cd0ba35de6", 00:13:13.817 "is_configured": true, 00:13:13.817 "data_offset": 2048, 00:13:13.817 "data_size": 63488 00:13:13.817 }, 00:13:13.817 { 00:13:13.817 
"name": "BaseBdev4", 00:13:13.817 "uuid": "77fe06ea-b1aa-5a26-b696-6cb8ed36e02d", 00:13:13.817 "is_configured": true, 00:13:13.817 "data_offset": 2048, 00:13:13.817 "data_size": 63488 00:13:13.817 } 00:13:13.817 ] 00:13:13.817 }' 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.817 [2024-11-26 12:56:31.438276] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:13.817 [2024-11-26 12:56:31.438467] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:13.817 [2024-11-26 12:56:31.438525] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:13.817 request: 00:13:13.817 { 00:13:13.817 "base_bdev": "BaseBdev1", 00:13:13.817 "raid_bdev": "raid_bdev1", 00:13:13.817 "method": "bdev_raid_add_base_bdev", 00:13:13.817 "req_id": 1 00:13:13.817 } 00:13:13.817 Got JSON-RPC error response 00:13:13.817 response: 00:13:13.817 { 00:13:13.817 "code": -22, 00:13:13.817 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:13.817 } 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:13.817 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:15.198 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:15.198 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.198 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.198 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.198 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 
-- # local strip_size=0 00:13:15.198 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:15.198 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.198 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.198 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.198 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.198 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.198 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.198 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.198 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.198 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.198 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.198 "name": "raid_bdev1", 00:13:15.198 "uuid": "bfc817be-6228-431d-b9a6-02d5eb6d020b", 00:13:15.198 "strip_size_kb": 0, 00:13:15.198 "state": "online", 00:13:15.198 "raid_level": "raid1", 00:13:15.198 "superblock": true, 00:13:15.198 "num_base_bdevs": 4, 00:13:15.198 "num_base_bdevs_discovered": 2, 00:13:15.198 "num_base_bdevs_operational": 2, 00:13:15.198 "base_bdevs_list": [ 00:13:15.198 { 00:13:15.198 "name": null, 00:13:15.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.198 "is_configured": false, 00:13:15.198 "data_offset": 0, 00:13:15.198 "data_size": 63488 00:13:15.198 }, 00:13:15.198 { 00:13:15.198 "name": null, 00:13:15.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.198 "is_configured": false, 
00:13:15.198 "data_offset": 2048, 00:13:15.198 "data_size": 63488 00:13:15.198 }, 00:13:15.198 { 00:13:15.198 "name": "BaseBdev3", 00:13:15.198 "uuid": "79bdbab8-404f-5c1d-b776-12cd0ba35de6", 00:13:15.198 "is_configured": true, 00:13:15.198 "data_offset": 2048, 00:13:15.198 "data_size": 63488 00:13:15.198 }, 00:13:15.198 { 00:13:15.198 "name": "BaseBdev4", 00:13:15.198 "uuid": "77fe06ea-b1aa-5a26-b696-6cb8ed36e02d", 00:13:15.198 "is_configured": true, 00:13:15.198 "data_offset": 2048, 00:13:15.198 "data_size": 63488 00:13:15.198 } 00:13:15.198 ] 00:13:15.198 }' 00:13:15.198 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.198 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.198 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:15.198 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.198 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:15.198 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:15.198 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.198 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.198 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.459 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.459 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.459 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.460 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:13:15.460 "name": "raid_bdev1", 00:13:15.460 "uuid": "bfc817be-6228-431d-b9a6-02d5eb6d020b", 00:13:15.460 "strip_size_kb": 0, 00:13:15.460 "state": "online", 00:13:15.460 "raid_level": "raid1", 00:13:15.460 "superblock": true, 00:13:15.460 "num_base_bdevs": 4, 00:13:15.460 "num_base_bdevs_discovered": 2, 00:13:15.460 "num_base_bdevs_operational": 2, 00:13:15.460 "base_bdevs_list": [ 00:13:15.460 { 00:13:15.460 "name": null, 00:13:15.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.460 "is_configured": false, 00:13:15.460 "data_offset": 0, 00:13:15.460 "data_size": 63488 00:13:15.460 }, 00:13:15.460 { 00:13:15.460 "name": null, 00:13:15.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.460 "is_configured": false, 00:13:15.460 "data_offset": 2048, 00:13:15.460 "data_size": 63488 00:13:15.460 }, 00:13:15.460 { 00:13:15.460 "name": "BaseBdev3", 00:13:15.460 "uuid": "79bdbab8-404f-5c1d-b776-12cd0ba35de6", 00:13:15.460 "is_configured": true, 00:13:15.460 "data_offset": 2048, 00:13:15.460 "data_size": 63488 00:13:15.460 }, 00:13:15.460 { 00:13:15.460 "name": "BaseBdev4", 00:13:15.460 "uuid": "77fe06ea-b1aa-5a26-b696-6cb8ed36e02d", 00:13:15.460 "is_configured": true, 00:13:15.460 "data_offset": 2048, 00:13:15.460 "data_size": 63488 00:13:15.460 } 00:13:15.460 ] 00:13:15.460 }' 00:13:15.460 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.460 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:15.460 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.460 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:15.460 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 89933 00:13:15.460 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 
89933 ']' 00:13:15.460 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 89933 00:13:15.460 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:13:15.460 12:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:15.460 12:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89933 00:13:15.460 killing process with pid 89933 00:13:15.460 Received shutdown signal, test time was about 17.991128 seconds 00:13:15.460 00:13:15.460 Latency(us) 00:13:15.460 [2024-11-26T12:56:33.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:15.460 [2024-11-26T12:56:33.144Z] =================================================================================================================== 00:13:15.460 [2024-11-26T12:56:33.144Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:15.460 12:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:15.460 12:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:15.460 12:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89933' 00:13:15.460 12:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 89933 00:13:15.460 [2024-11-26 12:56:33.037509] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:15.460 [2024-11-26 12:56:33.037653] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:15.460 12:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 89933 00:13:15.460 [2024-11-26 12:56:33.037717] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:15.460 [2024-11-26 12:56:33.037729] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:13:15.460 [2024-11-26 12:56:33.082997] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:15.719 12:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:15.719 00:13:15.719 real 0m19.968s 00:13:15.719 user 0m26.604s 00:13:15.719 sys 0m2.664s 00:13:15.719 ************************************ 00:13:15.719 END TEST raid_rebuild_test_sb_io 00:13:15.719 ************************************ 00:13:15.719 12:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:15.719 12:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.719 12:56:33 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:13:15.719 12:56:33 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:13:15.719 12:56:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:15.719 12:56:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:15.720 12:56:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:15.980 ************************************ 00:13:15.980 START TEST raid5f_state_function_test 00:13:15.980 ************************************ 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:15.980 Process raid pid: 90645 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=90645 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90645' 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 90645 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 90645 ']' 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:15.980 12:56:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.980 [2024-11-26 12:56:33.501465] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:15.980 [2024-11-26 12:56:33.501711] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.240 [2024-11-26 12:56:33.664137] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.240 [2024-11-26 12:56:33.711824] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.240 [2024-11-26 12:56:33.754583] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.240 [2024-11-26 12:56:33.754680] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.807 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:16.807 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:13:16.807 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:16.807 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.807 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.807 [2024-11-26 12:56:34.340356] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:16.807 [2024-11-26 12:56:34.340452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:16.807 [2024-11-26 12:56:34.340484] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:16.807 [2024-11-26 12:56:34.340507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:16.807 [2024-11-26 12:56:34.340524] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:16.807 [2024-11-26 12:56:34.340547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:16.807 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.807 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:16.807 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.807 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.807 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:16.807 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.807 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.807 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.807 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.807 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.807 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.807 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.807 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.807 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.807 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.807 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:13:16.807 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.807 "name": "Existed_Raid", 00:13:16.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.807 "strip_size_kb": 64, 00:13:16.807 "state": "configuring", 00:13:16.807 "raid_level": "raid5f", 00:13:16.807 "superblock": false, 00:13:16.807 "num_base_bdevs": 3, 00:13:16.807 "num_base_bdevs_discovered": 0, 00:13:16.807 "num_base_bdevs_operational": 3, 00:13:16.807 "base_bdevs_list": [ 00:13:16.807 { 00:13:16.807 "name": "BaseBdev1", 00:13:16.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.807 "is_configured": false, 00:13:16.807 "data_offset": 0, 00:13:16.807 "data_size": 0 00:13:16.807 }, 00:13:16.807 { 00:13:16.807 "name": "BaseBdev2", 00:13:16.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.807 "is_configured": false, 00:13:16.807 "data_offset": 0, 00:13:16.807 "data_size": 0 00:13:16.807 }, 00:13:16.807 { 00:13:16.807 "name": "BaseBdev3", 00:13:16.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.807 "is_configured": false, 00:13:16.807 "data_offset": 0, 00:13:16.807 "data_size": 0 00:13:16.807 } 00:13:16.807 ] 00:13:16.807 }' 00:13:16.807 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.807 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.375 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:17.375 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.375 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.375 [2024-11-26 12:56:34.771596] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:17.375 [2024-11-26 12:56:34.771632] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006280 name Existed_Raid, state configuring 00:13:17.375 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.375 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:17.375 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.375 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.375 [2024-11-26 12:56:34.779633] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:17.375 [2024-11-26 12:56:34.779670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:17.375 [2024-11-26 12:56:34.779677] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:17.375 [2024-11-26 12:56:34.779686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:17.375 [2024-11-26 12:56:34.779692] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:17.375 [2024-11-26 12:56:34.779702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:17.375 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.375 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:17.375 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.375 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.375 [2024-11-26 12:56:34.800438] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:17.375 BaseBdev1 00:13:17.376 12:56:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.376 [ 00:13:17.376 { 00:13:17.376 "name": "BaseBdev1", 00:13:17.376 "aliases": [ 00:13:17.376 "672bb1b9-caa4-441a-8114-4c95508a5d47" 00:13:17.376 ], 00:13:17.376 "product_name": "Malloc disk", 00:13:17.376 "block_size": 512, 00:13:17.376 "num_blocks": 65536, 00:13:17.376 "uuid": "672bb1b9-caa4-441a-8114-4c95508a5d47", 00:13:17.376 "assigned_rate_limits": { 00:13:17.376 "rw_ios_per_sec": 0, 00:13:17.376 
"rw_mbytes_per_sec": 0, 00:13:17.376 "r_mbytes_per_sec": 0, 00:13:17.376 "w_mbytes_per_sec": 0 00:13:17.376 }, 00:13:17.376 "claimed": true, 00:13:17.376 "claim_type": "exclusive_write", 00:13:17.376 "zoned": false, 00:13:17.376 "supported_io_types": { 00:13:17.376 "read": true, 00:13:17.376 "write": true, 00:13:17.376 "unmap": true, 00:13:17.376 "flush": true, 00:13:17.376 "reset": true, 00:13:17.376 "nvme_admin": false, 00:13:17.376 "nvme_io": false, 00:13:17.376 "nvme_io_md": false, 00:13:17.376 "write_zeroes": true, 00:13:17.376 "zcopy": true, 00:13:17.376 "get_zone_info": false, 00:13:17.376 "zone_management": false, 00:13:17.376 "zone_append": false, 00:13:17.376 "compare": false, 00:13:17.376 "compare_and_write": false, 00:13:17.376 "abort": true, 00:13:17.376 "seek_hole": false, 00:13:17.376 "seek_data": false, 00:13:17.376 "copy": true, 00:13:17.376 "nvme_iov_md": false 00:13:17.376 }, 00:13:17.376 "memory_domains": [ 00:13:17.376 { 00:13:17.376 "dma_device_id": "system", 00:13:17.376 "dma_device_type": 1 00:13:17.376 }, 00:13:17.376 { 00:13:17.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.376 "dma_device_type": 2 00:13:17.376 } 00:13:17.376 ], 00:13:17.376 "driver_specific": {} 00:13:17.376 } 00:13:17.376 ] 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:17.376 12:56:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.376 "name": "Existed_Raid", 00:13:17.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.376 "strip_size_kb": 64, 00:13:17.376 "state": "configuring", 00:13:17.376 "raid_level": "raid5f", 00:13:17.376 "superblock": false, 00:13:17.376 "num_base_bdevs": 3, 00:13:17.376 "num_base_bdevs_discovered": 1, 00:13:17.376 "num_base_bdevs_operational": 3, 00:13:17.376 "base_bdevs_list": [ 00:13:17.376 { 00:13:17.376 "name": "BaseBdev1", 00:13:17.376 "uuid": "672bb1b9-caa4-441a-8114-4c95508a5d47", 00:13:17.376 "is_configured": true, 00:13:17.376 "data_offset": 0, 00:13:17.376 "data_size": 65536 00:13:17.376 }, 00:13:17.376 { 00:13:17.376 "name": 
"BaseBdev2", 00:13:17.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.376 "is_configured": false, 00:13:17.376 "data_offset": 0, 00:13:17.376 "data_size": 0 00:13:17.376 }, 00:13:17.376 { 00:13:17.376 "name": "BaseBdev3", 00:13:17.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.376 "is_configured": false, 00:13:17.376 "data_offset": 0, 00:13:17.376 "data_size": 0 00:13:17.376 } 00:13:17.376 ] 00:13:17.376 }' 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.376 12:56:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.635 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:17.635 12:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.635 12:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.635 [2024-11-26 12:56:35.299655] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:17.635 [2024-11-26 12:56:35.299736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:13:17.635 12:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.635 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:17.635 12:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.635 12:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.635 [2024-11-26 12:56:35.311697] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:17.894 [2024-11-26 12:56:35.313617] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:13:17.894 [2024-11-26 12:56:35.313691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:17.894 [2024-11-26 12:56:35.313725] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:17.894 [2024-11-26 12:56:35.313765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:17.894 12:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.894 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:17.894 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:17.894 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:17.894 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.894 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.894 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:17.894 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.894 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:17.894 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.894 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.894 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.894 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.894 12:56:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.894 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.894 12:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.894 12:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.894 12:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.894 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.894 "name": "Existed_Raid", 00:13:17.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.894 "strip_size_kb": 64, 00:13:17.894 "state": "configuring", 00:13:17.894 "raid_level": "raid5f", 00:13:17.894 "superblock": false, 00:13:17.894 "num_base_bdevs": 3, 00:13:17.894 "num_base_bdevs_discovered": 1, 00:13:17.894 "num_base_bdevs_operational": 3, 00:13:17.894 "base_bdevs_list": [ 00:13:17.894 { 00:13:17.894 "name": "BaseBdev1", 00:13:17.894 "uuid": "672bb1b9-caa4-441a-8114-4c95508a5d47", 00:13:17.894 "is_configured": true, 00:13:17.894 "data_offset": 0, 00:13:17.894 "data_size": 65536 00:13:17.894 }, 00:13:17.894 { 00:13:17.894 "name": "BaseBdev2", 00:13:17.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.894 "is_configured": false, 00:13:17.894 "data_offset": 0, 00:13:17.894 "data_size": 0 00:13:17.894 }, 00:13:17.894 { 00:13:17.894 "name": "BaseBdev3", 00:13:17.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.894 "is_configured": false, 00:13:17.894 "data_offset": 0, 00:13:17.894 "data_size": 0 00:13:17.894 } 00:13:17.894 ] 00:13:17.894 }' 00:13:17.894 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.894 12:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.154 [2024-11-26 12:56:35.783259] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:18.154 BaseBdev2 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:18.154 [ 00:13:18.154 { 00:13:18.154 "name": "BaseBdev2", 00:13:18.154 "aliases": [ 00:13:18.154 "84284da7-7d9d-481e-938d-1453dfe58ef1" 00:13:18.154 ], 00:13:18.154 "product_name": "Malloc disk", 00:13:18.154 "block_size": 512, 00:13:18.154 "num_blocks": 65536, 00:13:18.154 "uuid": "84284da7-7d9d-481e-938d-1453dfe58ef1", 00:13:18.154 "assigned_rate_limits": { 00:13:18.154 "rw_ios_per_sec": 0, 00:13:18.154 "rw_mbytes_per_sec": 0, 00:13:18.154 "r_mbytes_per_sec": 0, 00:13:18.154 "w_mbytes_per_sec": 0 00:13:18.154 }, 00:13:18.154 "claimed": true, 00:13:18.154 "claim_type": "exclusive_write", 00:13:18.154 "zoned": false, 00:13:18.154 "supported_io_types": { 00:13:18.154 "read": true, 00:13:18.154 "write": true, 00:13:18.154 "unmap": true, 00:13:18.154 "flush": true, 00:13:18.154 "reset": true, 00:13:18.154 "nvme_admin": false, 00:13:18.154 "nvme_io": false, 00:13:18.154 "nvme_io_md": false, 00:13:18.154 "write_zeroes": true, 00:13:18.154 "zcopy": true, 00:13:18.154 "get_zone_info": false, 00:13:18.154 "zone_management": false, 00:13:18.154 "zone_append": false, 00:13:18.154 "compare": false, 00:13:18.154 "compare_and_write": false, 00:13:18.154 "abort": true, 00:13:18.154 "seek_hole": false, 00:13:18.154 "seek_data": false, 00:13:18.154 "copy": true, 00:13:18.154 "nvme_iov_md": false 00:13:18.154 }, 00:13:18.154 "memory_domains": [ 00:13:18.154 { 00:13:18.154 "dma_device_id": "system", 00:13:18.154 "dma_device_type": 1 00:13:18.154 }, 00:13:18.154 { 00:13:18.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.154 "dma_device_type": 2 00:13:18.154 } 00:13:18.154 ], 00:13:18.154 "driver_specific": {} 00:13:18.154 } 00:13:18.154 ] 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.154 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.155 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.155 12:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.155 12:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.414 12:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.414 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:13:18.414 "name": "Existed_Raid", 00:13:18.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.414 "strip_size_kb": 64, 00:13:18.414 "state": "configuring", 00:13:18.414 "raid_level": "raid5f", 00:13:18.414 "superblock": false, 00:13:18.414 "num_base_bdevs": 3, 00:13:18.414 "num_base_bdevs_discovered": 2, 00:13:18.414 "num_base_bdevs_operational": 3, 00:13:18.414 "base_bdevs_list": [ 00:13:18.414 { 00:13:18.414 "name": "BaseBdev1", 00:13:18.414 "uuid": "672bb1b9-caa4-441a-8114-4c95508a5d47", 00:13:18.414 "is_configured": true, 00:13:18.414 "data_offset": 0, 00:13:18.414 "data_size": 65536 00:13:18.414 }, 00:13:18.414 { 00:13:18.414 "name": "BaseBdev2", 00:13:18.414 "uuid": "84284da7-7d9d-481e-938d-1453dfe58ef1", 00:13:18.414 "is_configured": true, 00:13:18.414 "data_offset": 0, 00:13:18.414 "data_size": 65536 00:13:18.414 }, 00:13:18.414 { 00:13:18.414 "name": "BaseBdev3", 00:13:18.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.414 "is_configured": false, 00:13:18.414 "data_offset": 0, 00:13:18.414 "data_size": 0 00:13:18.414 } 00:13:18.414 ] 00:13:18.414 }' 00:13:18.414 12:56:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.414 12:56:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.674 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:18.674 12:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.674 12:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.674 [2024-11-26 12:56:36.281423] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:18.674 [2024-11-26 12:56:36.281548] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:13:18.674 [2024-11-26 12:56:36.281579] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:18.674 [2024-11-26 12:56:36.281893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:18.674 [2024-11-26 12:56:36.282347] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:13:18.674 [2024-11-26 12:56:36.282360] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:13:18.674 [2024-11-26 12:56:36.282586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.674 BaseBdev3 00:13:18.674 12:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.674 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:18.674 12:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:18.674 12:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:18.674 12:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:18.674 12:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:18.674 12:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:18.674 12:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:18.674 12:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.674 12:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.674 12:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.674 12:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:13:18.674 12:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.674 12:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.674 [ 00:13:18.674 { 00:13:18.674 "name": "BaseBdev3", 00:13:18.674 "aliases": [ 00:13:18.674 "01717877-bcf3-42ad-b640-ee064579a7fb" 00:13:18.674 ], 00:13:18.674 "product_name": "Malloc disk", 00:13:18.674 "block_size": 512, 00:13:18.674 "num_blocks": 65536, 00:13:18.674 "uuid": "01717877-bcf3-42ad-b640-ee064579a7fb", 00:13:18.674 "assigned_rate_limits": { 00:13:18.674 "rw_ios_per_sec": 0, 00:13:18.674 "rw_mbytes_per_sec": 0, 00:13:18.674 "r_mbytes_per_sec": 0, 00:13:18.674 "w_mbytes_per_sec": 0 00:13:18.674 }, 00:13:18.674 "claimed": true, 00:13:18.674 "claim_type": "exclusive_write", 00:13:18.674 "zoned": false, 00:13:18.674 "supported_io_types": { 00:13:18.674 "read": true, 00:13:18.674 "write": true, 00:13:18.674 "unmap": true, 00:13:18.674 "flush": true, 00:13:18.674 "reset": true, 00:13:18.674 "nvme_admin": false, 00:13:18.674 "nvme_io": false, 00:13:18.674 "nvme_io_md": false, 00:13:18.674 "write_zeroes": true, 00:13:18.674 "zcopy": true, 00:13:18.674 "get_zone_info": false, 00:13:18.674 "zone_management": false, 00:13:18.674 "zone_append": false, 00:13:18.674 "compare": false, 00:13:18.674 "compare_and_write": false, 00:13:18.674 "abort": true, 00:13:18.674 "seek_hole": false, 00:13:18.674 "seek_data": false, 00:13:18.674 "copy": true, 00:13:18.674 "nvme_iov_md": false 00:13:18.674 }, 00:13:18.674 "memory_domains": [ 00:13:18.674 { 00:13:18.674 "dma_device_id": "system", 00:13:18.674 "dma_device_type": 1 00:13:18.674 }, 00:13:18.674 { 00:13:18.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.674 "dma_device_type": 2 00:13:18.674 } 00:13:18.674 ], 00:13:18.674 "driver_specific": {} 00:13:18.674 } 00:13:18.674 ] 00:13:18.674 12:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:13:18.674 12:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:18.674 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:18.674 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:18.674 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:18.674 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.674 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.675 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:18.675 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.675 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:18.675 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.675 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.675 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.675 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.675 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.675 12:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.675 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.675 12:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.675 12:56:36 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.934 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.934 "name": "Existed_Raid", 00:13:18.934 "uuid": "531b15c3-ae96-4c07-9e04-fbba9b4ca389", 00:13:18.934 "strip_size_kb": 64, 00:13:18.934 "state": "online", 00:13:18.934 "raid_level": "raid5f", 00:13:18.934 "superblock": false, 00:13:18.934 "num_base_bdevs": 3, 00:13:18.934 "num_base_bdevs_discovered": 3, 00:13:18.934 "num_base_bdevs_operational": 3, 00:13:18.934 "base_bdevs_list": [ 00:13:18.934 { 00:13:18.934 "name": "BaseBdev1", 00:13:18.934 "uuid": "672bb1b9-caa4-441a-8114-4c95508a5d47", 00:13:18.934 "is_configured": true, 00:13:18.934 "data_offset": 0, 00:13:18.934 "data_size": 65536 00:13:18.934 }, 00:13:18.934 { 00:13:18.934 "name": "BaseBdev2", 00:13:18.934 "uuid": "84284da7-7d9d-481e-938d-1453dfe58ef1", 00:13:18.934 "is_configured": true, 00:13:18.934 "data_offset": 0, 00:13:18.934 "data_size": 65536 00:13:18.934 }, 00:13:18.934 { 00:13:18.934 "name": "BaseBdev3", 00:13:18.934 "uuid": "01717877-bcf3-42ad-b640-ee064579a7fb", 00:13:18.934 "is_configured": true, 00:13:18.934 "data_offset": 0, 00:13:18.934 "data_size": 65536 00:13:18.934 } 00:13:18.934 ] 00:13:18.934 }' 00:13:18.934 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.934 12:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.193 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:19.193 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:19.194 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:19.194 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:19.194 12:56:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:19.194 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:19.194 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:19.194 12:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.194 12:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.194 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:19.194 [2024-11-26 12:56:36.792765] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:19.194 12:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.194 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:19.194 "name": "Existed_Raid", 00:13:19.194 "aliases": [ 00:13:19.194 "531b15c3-ae96-4c07-9e04-fbba9b4ca389" 00:13:19.194 ], 00:13:19.194 "product_name": "Raid Volume", 00:13:19.194 "block_size": 512, 00:13:19.194 "num_blocks": 131072, 00:13:19.194 "uuid": "531b15c3-ae96-4c07-9e04-fbba9b4ca389", 00:13:19.194 "assigned_rate_limits": { 00:13:19.194 "rw_ios_per_sec": 0, 00:13:19.194 "rw_mbytes_per_sec": 0, 00:13:19.194 "r_mbytes_per_sec": 0, 00:13:19.194 "w_mbytes_per_sec": 0 00:13:19.194 }, 00:13:19.194 "claimed": false, 00:13:19.194 "zoned": false, 00:13:19.194 "supported_io_types": { 00:13:19.194 "read": true, 00:13:19.194 "write": true, 00:13:19.194 "unmap": false, 00:13:19.194 "flush": false, 00:13:19.194 "reset": true, 00:13:19.194 "nvme_admin": false, 00:13:19.194 "nvme_io": false, 00:13:19.194 "nvme_io_md": false, 00:13:19.194 "write_zeroes": true, 00:13:19.194 "zcopy": false, 00:13:19.194 "get_zone_info": false, 00:13:19.194 "zone_management": false, 00:13:19.194 "zone_append": false, 
00:13:19.194 "compare": false, 00:13:19.194 "compare_and_write": false, 00:13:19.194 "abort": false, 00:13:19.194 "seek_hole": false, 00:13:19.194 "seek_data": false, 00:13:19.194 "copy": false, 00:13:19.194 "nvme_iov_md": false 00:13:19.194 }, 00:13:19.194 "driver_specific": { 00:13:19.194 "raid": { 00:13:19.194 "uuid": "531b15c3-ae96-4c07-9e04-fbba9b4ca389", 00:13:19.194 "strip_size_kb": 64, 00:13:19.194 "state": "online", 00:13:19.194 "raid_level": "raid5f", 00:13:19.194 "superblock": false, 00:13:19.194 "num_base_bdevs": 3, 00:13:19.194 "num_base_bdevs_discovered": 3, 00:13:19.194 "num_base_bdevs_operational": 3, 00:13:19.194 "base_bdevs_list": [ 00:13:19.194 { 00:13:19.194 "name": "BaseBdev1", 00:13:19.194 "uuid": "672bb1b9-caa4-441a-8114-4c95508a5d47", 00:13:19.194 "is_configured": true, 00:13:19.194 "data_offset": 0, 00:13:19.194 "data_size": 65536 00:13:19.194 }, 00:13:19.194 { 00:13:19.194 "name": "BaseBdev2", 00:13:19.194 "uuid": "84284da7-7d9d-481e-938d-1453dfe58ef1", 00:13:19.194 "is_configured": true, 00:13:19.194 "data_offset": 0, 00:13:19.194 "data_size": 65536 00:13:19.194 }, 00:13:19.194 { 00:13:19.194 "name": "BaseBdev3", 00:13:19.194 "uuid": "01717877-bcf3-42ad-b640-ee064579a7fb", 00:13:19.194 "is_configured": true, 00:13:19.194 "data_offset": 0, 00:13:19.194 "data_size": 65536 00:13:19.194 } 00:13:19.194 ] 00:13:19.194 } 00:13:19.194 } 00:13:19.194 }' 00:13:19.194 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:19.454 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:19.454 BaseBdev2 00:13:19.454 BaseBdev3' 00:13:19.454 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.454 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:13:19.454 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:19.454 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.454 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:19.454 12:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.454 12:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.454 12:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.454 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:19.454 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:19.454 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:19.454 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:19.454 12:56:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.454 12:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.454 12:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.454 12:56:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.454 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:19.454 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:19.454 12:56:37 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:19.454 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:19.454 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.454 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.454 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.454 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.454 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:19.454 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:19.454 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:19.454 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.454 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.454 [2024-11-26 12:56:37.064215] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:19.454 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.454 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:19.454 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:19.454 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:19.454 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:19.454 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:19.454 
12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:19.454 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.454 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.454 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:19.454 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.454 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:19.454 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.454 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.454 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.454 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.454 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.455 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.455 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.455 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.455 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.455 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.455 "name": "Existed_Raid", 00:13:19.455 "uuid": "531b15c3-ae96-4c07-9e04-fbba9b4ca389", 00:13:19.455 "strip_size_kb": 64, 00:13:19.455 "state": 
"online", 00:13:19.455 "raid_level": "raid5f", 00:13:19.455 "superblock": false, 00:13:19.455 "num_base_bdevs": 3, 00:13:19.455 "num_base_bdevs_discovered": 2, 00:13:19.455 "num_base_bdevs_operational": 2, 00:13:19.455 "base_bdevs_list": [ 00:13:19.455 { 00:13:19.455 "name": null, 00:13:19.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.455 "is_configured": false, 00:13:19.455 "data_offset": 0, 00:13:19.455 "data_size": 65536 00:13:19.455 }, 00:13:19.455 { 00:13:19.455 "name": "BaseBdev2", 00:13:19.455 "uuid": "84284da7-7d9d-481e-938d-1453dfe58ef1", 00:13:19.455 "is_configured": true, 00:13:19.455 "data_offset": 0, 00:13:19.455 "data_size": 65536 00:13:19.455 }, 00:13:19.455 { 00:13:19.455 "name": "BaseBdev3", 00:13:19.455 "uuid": "01717877-bcf3-42ad-b640-ee064579a7fb", 00:13:19.455 "is_configured": true, 00:13:19.455 "data_offset": 0, 00:13:19.455 "data_size": 65536 00:13:19.455 } 00:13:19.455 ] 00:13:19.455 }' 00:13:19.455 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.455 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.025 [2024-11-26 12:56:37.586802] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:20.025 [2024-11-26 12:56:37.586885] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:20.025 [2024-11-26 12:56:37.597832] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.025 [2024-11-26 12:56:37.657772] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:20.025 [2024-11-26 12:56:37.657858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.025 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.286 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:20.286 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:20.286 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:20.286 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:20.286 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:13:20.286 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:20.286 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.286 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.286 BaseBdev2 00:13:20.286 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.286 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:20.286 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:20.286 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:20.286 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:20.286 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:20.286 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:20.286 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:20.286 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.286 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.286 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.286 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:20.286 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.286 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:20.286 [ 00:13:20.286 { 00:13:20.286 "name": "BaseBdev2", 00:13:20.286 "aliases": [ 00:13:20.286 "c866ce93-4219-4e89-8f5c-620b9fa7a191" 00:13:20.286 ], 00:13:20.286 "product_name": "Malloc disk", 00:13:20.286 "block_size": 512, 00:13:20.286 "num_blocks": 65536, 00:13:20.286 "uuid": "c866ce93-4219-4e89-8f5c-620b9fa7a191", 00:13:20.286 "assigned_rate_limits": { 00:13:20.286 "rw_ios_per_sec": 0, 00:13:20.286 "rw_mbytes_per_sec": 0, 00:13:20.286 "r_mbytes_per_sec": 0, 00:13:20.286 "w_mbytes_per_sec": 0 00:13:20.286 }, 00:13:20.286 "claimed": false, 00:13:20.286 "zoned": false, 00:13:20.286 "supported_io_types": { 00:13:20.286 "read": true, 00:13:20.286 "write": true, 00:13:20.286 "unmap": true, 00:13:20.286 "flush": true, 00:13:20.286 "reset": true, 00:13:20.286 "nvme_admin": false, 00:13:20.286 "nvme_io": false, 00:13:20.286 "nvme_io_md": false, 00:13:20.287 "write_zeroes": true, 00:13:20.287 "zcopy": true, 00:13:20.287 "get_zone_info": false, 00:13:20.287 "zone_management": false, 00:13:20.287 "zone_append": false, 00:13:20.287 "compare": false, 00:13:20.287 "compare_and_write": false, 00:13:20.287 "abort": true, 00:13:20.287 "seek_hole": false, 00:13:20.287 "seek_data": false, 00:13:20.287 "copy": true, 00:13:20.287 "nvme_iov_md": false 00:13:20.287 }, 00:13:20.287 "memory_domains": [ 00:13:20.287 { 00:13:20.287 "dma_device_id": "system", 00:13:20.287 "dma_device_type": 1 00:13:20.287 }, 00:13:20.287 { 00:13:20.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.287 "dma_device_type": 2 00:13:20.287 } 00:13:20.287 ], 00:13:20.287 "driver_specific": {} 00:13:20.287 } 00:13:20.287 ] 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.287 BaseBdev3 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:20.287 [ 00:13:20.287 { 00:13:20.287 "name": "BaseBdev3", 00:13:20.287 "aliases": [ 00:13:20.287 "67587c17-d992-4f99-8fc4-b02a3f16740b" 00:13:20.287 ], 00:13:20.287 "product_name": "Malloc disk", 00:13:20.287 "block_size": 512, 00:13:20.287 "num_blocks": 65536, 00:13:20.287 "uuid": "67587c17-d992-4f99-8fc4-b02a3f16740b", 00:13:20.287 "assigned_rate_limits": { 00:13:20.287 "rw_ios_per_sec": 0, 00:13:20.287 "rw_mbytes_per_sec": 0, 00:13:20.287 "r_mbytes_per_sec": 0, 00:13:20.287 "w_mbytes_per_sec": 0 00:13:20.287 }, 00:13:20.287 "claimed": false, 00:13:20.287 "zoned": false, 00:13:20.287 "supported_io_types": { 00:13:20.287 "read": true, 00:13:20.287 "write": true, 00:13:20.287 "unmap": true, 00:13:20.287 "flush": true, 00:13:20.287 "reset": true, 00:13:20.287 "nvme_admin": false, 00:13:20.287 "nvme_io": false, 00:13:20.287 "nvme_io_md": false, 00:13:20.287 "write_zeroes": true, 00:13:20.287 "zcopy": true, 00:13:20.287 "get_zone_info": false, 00:13:20.287 "zone_management": false, 00:13:20.287 "zone_append": false, 00:13:20.287 "compare": false, 00:13:20.287 "compare_and_write": false, 00:13:20.287 "abort": true, 00:13:20.287 "seek_hole": false, 00:13:20.287 "seek_data": false, 00:13:20.287 "copy": true, 00:13:20.287 "nvme_iov_md": false 00:13:20.287 }, 00:13:20.287 "memory_domains": [ 00:13:20.287 { 00:13:20.287 "dma_device_id": "system", 00:13:20.287 "dma_device_type": 1 00:13:20.287 }, 00:13:20.287 { 00:13:20.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.287 "dma_device_type": 2 00:13:20.287 } 00:13:20.287 ], 00:13:20.287 "driver_specific": {} 00:13:20.287 } 00:13:20.287 ] 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:20.287 12:56:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.287 [2024-11-26 12:56:37.832165] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:20.287 [2024-11-26 12:56:37.832265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:20.287 [2024-11-26 12:56:37.832292] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:20.287 [2024-11-26 12:56:37.834055] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.287 12:56:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.287 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.287 "name": "Existed_Raid", 00:13:20.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.287 "strip_size_kb": 64, 00:13:20.287 "state": "configuring", 00:13:20.287 "raid_level": "raid5f", 00:13:20.287 "superblock": false, 00:13:20.287 "num_base_bdevs": 3, 00:13:20.287 "num_base_bdevs_discovered": 2, 00:13:20.287 "num_base_bdevs_operational": 3, 00:13:20.287 "base_bdevs_list": [ 00:13:20.287 { 00:13:20.287 "name": "BaseBdev1", 00:13:20.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.287 "is_configured": false, 00:13:20.288 "data_offset": 0, 00:13:20.288 "data_size": 0 00:13:20.288 }, 00:13:20.288 { 00:13:20.288 "name": "BaseBdev2", 00:13:20.288 "uuid": "c866ce93-4219-4e89-8f5c-620b9fa7a191", 00:13:20.288 "is_configured": true, 00:13:20.288 "data_offset": 0, 00:13:20.288 "data_size": 65536 00:13:20.288 }, 00:13:20.288 { 00:13:20.288 "name": "BaseBdev3", 00:13:20.288 "uuid": "67587c17-d992-4f99-8fc4-b02a3f16740b", 00:13:20.288 "is_configured": true, 
00:13:20.288 "data_offset": 0, 00:13:20.288 "data_size": 65536 00:13:20.288 } 00:13:20.288 ] 00:13:20.288 }' 00:13:20.288 12:56:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.288 12:56:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.858 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:20.858 12:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.858 12:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.858 [2024-11-26 12:56:38.287405] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:20.858 12:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.858 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:20.858 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.858 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:20.858 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:20.858 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:20.858 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:20.858 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.858 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.858 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.858 12:56:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.858 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.858 12:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.858 12:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.858 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.858 12:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.858 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.858 "name": "Existed_Raid", 00:13:20.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.858 "strip_size_kb": 64, 00:13:20.858 "state": "configuring", 00:13:20.858 "raid_level": "raid5f", 00:13:20.858 "superblock": false, 00:13:20.858 "num_base_bdevs": 3, 00:13:20.858 "num_base_bdevs_discovered": 1, 00:13:20.858 "num_base_bdevs_operational": 3, 00:13:20.858 "base_bdevs_list": [ 00:13:20.858 { 00:13:20.858 "name": "BaseBdev1", 00:13:20.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.858 "is_configured": false, 00:13:20.858 "data_offset": 0, 00:13:20.858 "data_size": 0 00:13:20.858 }, 00:13:20.858 { 00:13:20.858 "name": null, 00:13:20.858 "uuid": "c866ce93-4219-4e89-8f5c-620b9fa7a191", 00:13:20.858 "is_configured": false, 00:13:20.858 "data_offset": 0, 00:13:20.858 "data_size": 65536 00:13:20.858 }, 00:13:20.858 { 00:13:20.858 "name": "BaseBdev3", 00:13:20.858 "uuid": "67587c17-d992-4f99-8fc4-b02a3f16740b", 00:13:20.858 "is_configured": true, 00:13:20.858 "data_offset": 0, 00:13:20.858 "data_size": 65536 00:13:20.858 } 00:13:20.858 ] 00:13:20.858 }' 00:13:20.858 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.858 12:56:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.118 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:21.118 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.118 12:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.118 12:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.118 12:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.118 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:21.118 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:21.118 12:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.118 12:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.118 [2024-11-26 12:56:38.773766] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:21.118 BaseBdev1 00:13:21.118 12:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.118 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:21.118 12:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:21.118 12:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:21.118 12:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:21.118 12:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:21.118 12:56:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:21.118 12:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:21.118 12:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.118 12:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.118 12:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.118 12:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:21.118 12:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.118 12:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.377 [ 00:13:21.377 { 00:13:21.377 "name": "BaseBdev1", 00:13:21.377 "aliases": [ 00:13:21.377 "9a560e0b-c64a-425a-8e1b-45b4a72066a6" 00:13:21.377 ], 00:13:21.377 "product_name": "Malloc disk", 00:13:21.377 "block_size": 512, 00:13:21.377 "num_blocks": 65536, 00:13:21.377 "uuid": "9a560e0b-c64a-425a-8e1b-45b4a72066a6", 00:13:21.377 "assigned_rate_limits": { 00:13:21.377 "rw_ios_per_sec": 0, 00:13:21.377 "rw_mbytes_per_sec": 0, 00:13:21.377 "r_mbytes_per_sec": 0, 00:13:21.377 "w_mbytes_per_sec": 0 00:13:21.377 }, 00:13:21.377 "claimed": true, 00:13:21.377 "claim_type": "exclusive_write", 00:13:21.377 "zoned": false, 00:13:21.377 "supported_io_types": { 00:13:21.377 "read": true, 00:13:21.377 "write": true, 00:13:21.377 "unmap": true, 00:13:21.377 "flush": true, 00:13:21.377 "reset": true, 00:13:21.377 "nvme_admin": false, 00:13:21.377 "nvme_io": false, 00:13:21.377 "nvme_io_md": false, 00:13:21.377 "write_zeroes": true, 00:13:21.377 "zcopy": true, 00:13:21.377 "get_zone_info": false, 00:13:21.377 "zone_management": false, 00:13:21.377 "zone_append": false, 00:13:21.377 
"compare": false, 00:13:21.377 "compare_and_write": false, 00:13:21.377 "abort": true, 00:13:21.377 "seek_hole": false, 00:13:21.377 "seek_data": false, 00:13:21.377 "copy": true, 00:13:21.377 "nvme_iov_md": false 00:13:21.377 }, 00:13:21.377 "memory_domains": [ 00:13:21.377 { 00:13:21.377 "dma_device_id": "system", 00:13:21.377 "dma_device_type": 1 00:13:21.377 }, 00:13:21.377 { 00:13:21.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.377 "dma_device_type": 2 00:13:21.377 } 00:13:21.377 ], 00:13:21.377 "driver_specific": {} 00:13:21.377 } 00:13:21.377 ] 00:13:21.377 12:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.377 12:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:21.377 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:21.377 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.377 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.378 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:21.378 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.378 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:21.378 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.378 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.378 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.378 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.378 12:56:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.378 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.378 12:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.378 12:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.378 12:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.378 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.378 "name": "Existed_Raid", 00:13:21.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.378 "strip_size_kb": 64, 00:13:21.378 "state": "configuring", 00:13:21.378 "raid_level": "raid5f", 00:13:21.378 "superblock": false, 00:13:21.378 "num_base_bdevs": 3, 00:13:21.378 "num_base_bdevs_discovered": 2, 00:13:21.378 "num_base_bdevs_operational": 3, 00:13:21.378 "base_bdevs_list": [ 00:13:21.378 { 00:13:21.378 "name": "BaseBdev1", 00:13:21.378 "uuid": "9a560e0b-c64a-425a-8e1b-45b4a72066a6", 00:13:21.378 "is_configured": true, 00:13:21.378 "data_offset": 0, 00:13:21.378 "data_size": 65536 00:13:21.378 }, 00:13:21.378 { 00:13:21.378 "name": null, 00:13:21.378 "uuid": "c866ce93-4219-4e89-8f5c-620b9fa7a191", 00:13:21.378 "is_configured": false, 00:13:21.378 "data_offset": 0, 00:13:21.378 "data_size": 65536 00:13:21.378 }, 00:13:21.378 { 00:13:21.378 "name": "BaseBdev3", 00:13:21.378 "uuid": "67587c17-d992-4f99-8fc4-b02a3f16740b", 00:13:21.378 "is_configured": true, 00:13:21.378 "data_offset": 0, 00:13:21.378 "data_size": 65536 00:13:21.378 } 00:13:21.378 ] 00:13:21.378 }' 00:13:21.378 12:56:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.378 12:56:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.645 12:56:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.645 12:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.645 12:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.645 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:21.645 12:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.911 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:21.911 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:21.911 12:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.911 12:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.911 [2024-11-26 12:56:39.320870] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:21.911 12:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.911 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:21.912 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.912 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.912 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:21.912 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.912 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:21.912 12:56:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.912 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.912 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.912 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.912 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.912 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.912 12:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.912 12:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.912 12:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.912 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.912 "name": "Existed_Raid", 00:13:21.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.912 "strip_size_kb": 64, 00:13:21.912 "state": "configuring", 00:13:21.912 "raid_level": "raid5f", 00:13:21.912 "superblock": false, 00:13:21.912 "num_base_bdevs": 3, 00:13:21.912 "num_base_bdevs_discovered": 1, 00:13:21.912 "num_base_bdevs_operational": 3, 00:13:21.912 "base_bdevs_list": [ 00:13:21.912 { 00:13:21.912 "name": "BaseBdev1", 00:13:21.912 "uuid": "9a560e0b-c64a-425a-8e1b-45b4a72066a6", 00:13:21.912 "is_configured": true, 00:13:21.912 "data_offset": 0, 00:13:21.912 "data_size": 65536 00:13:21.912 }, 00:13:21.912 { 00:13:21.912 "name": null, 00:13:21.912 "uuid": "c866ce93-4219-4e89-8f5c-620b9fa7a191", 00:13:21.912 "is_configured": false, 00:13:21.912 "data_offset": 0, 00:13:21.912 "data_size": 65536 00:13:21.912 }, 00:13:21.912 { 00:13:21.912 "name": null, 
00:13:21.912 "uuid": "67587c17-d992-4f99-8fc4-b02a3f16740b", 00:13:21.912 "is_configured": false, 00:13:21.912 "data_offset": 0, 00:13:21.912 "data_size": 65536 00:13:21.912 } 00:13:21.912 ] 00:13:21.912 }' 00:13:21.912 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.912 12:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.171 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.171 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:22.171 12:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.171 12:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.171 12:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.171 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:22.171 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:22.171 12:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.171 12:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.171 [2024-11-26 12:56:39.812053] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:22.171 12:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.171 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:22.171 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.171 12:56:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.171 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:22.171 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.171 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.171 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.171 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.171 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.171 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.171 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.171 12:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.171 12:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.171 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.171 12:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.430 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.430 "name": "Existed_Raid", 00:13:22.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.430 "strip_size_kb": 64, 00:13:22.430 "state": "configuring", 00:13:22.430 "raid_level": "raid5f", 00:13:22.430 "superblock": false, 00:13:22.430 "num_base_bdevs": 3, 00:13:22.430 "num_base_bdevs_discovered": 2, 00:13:22.430 "num_base_bdevs_operational": 3, 00:13:22.430 "base_bdevs_list": [ 00:13:22.430 { 
00:13:22.430 "name": "BaseBdev1", 00:13:22.430 "uuid": "9a560e0b-c64a-425a-8e1b-45b4a72066a6", 00:13:22.430 "is_configured": true, 00:13:22.430 "data_offset": 0, 00:13:22.430 "data_size": 65536 00:13:22.430 }, 00:13:22.430 { 00:13:22.430 "name": null, 00:13:22.430 "uuid": "c866ce93-4219-4e89-8f5c-620b9fa7a191", 00:13:22.430 "is_configured": false, 00:13:22.430 "data_offset": 0, 00:13:22.430 "data_size": 65536 00:13:22.430 }, 00:13:22.430 { 00:13:22.430 "name": "BaseBdev3", 00:13:22.430 "uuid": "67587c17-d992-4f99-8fc4-b02a3f16740b", 00:13:22.430 "is_configured": true, 00:13:22.430 "data_offset": 0, 00:13:22.430 "data_size": 65536 00:13:22.430 } 00:13:22.430 ] 00:13:22.430 }' 00:13:22.430 12:56:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.430 12:56:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.688 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.688 12:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.688 12:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.688 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:22.688 12:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.688 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:22.688 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:22.688 12:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.688 12:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.688 [2024-11-26 12:56:40.291329] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:22.688 12:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.688 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:22.688 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.688 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.688 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:22.688 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.688 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.688 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.688 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.688 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.688 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.688 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.688 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.688 12:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.688 12:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.688 12:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.688 12:56:40 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.688 "name": "Existed_Raid", 00:13:22.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.688 "strip_size_kb": 64, 00:13:22.688 "state": "configuring", 00:13:22.688 "raid_level": "raid5f", 00:13:22.688 "superblock": false, 00:13:22.688 "num_base_bdevs": 3, 00:13:22.688 "num_base_bdevs_discovered": 1, 00:13:22.689 "num_base_bdevs_operational": 3, 00:13:22.689 "base_bdevs_list": [ 00:13:22.689 { 00:13:22.689 "name": null, 00:13:22.689 "uuid": "9a560e0b-c64a-425a-8e1b-45b4a72066a6", 00:13:22.689 "is_configured": false, 00:13:22.689 "data_offset": 0, 00:13:22.689 "data_size": 65536 00:13:22.689 }, 00:13:22.689 { 00:13:22.689 "name": null, 00:13:22.689 "uuid": "c866ce93-4219-4e89-8f5c-620b9fa7a191", 00:13:22.689 "is_configured": false, 00:13:22.689 "data_offset": 0, 00:13:22.689 "data_size": 65536 00:13:22.689 }, 00:13:22.689 { 00:13:22.689 "name": "BaseBdev3", 00:13:22.689 "uuid": "67587c17-d992-4f99-8fc4-b02a3f16740b", 00:13:22.689 "is_configured": true, 00:13:22.689 "data_offset": 0, 00:13:22.689 "data_size": 65536 00:13:22.689 } 00:13:22.689 ] 00:13:22.689 }' 00:13:22.689 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.689 12:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.256 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:23.256 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.256 12:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.256 12:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.256 12:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.256 12:56:40 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:23.256 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:23.256 12:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.256 12:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.256 [2024-11-26 12:56:40.789067] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:23.256 12:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.256 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:23.256 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.256 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.256 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:23.256 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.256 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:23.256 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.256 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.256 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.256 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.256 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.256 12:56:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.256 12:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.256 12:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.256 12:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.256 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.256 "name": "Existed_Raid", 00:13:23.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.256 "strip_size_kb": 64, 00:13:23.256 "state": "configuring", 00:13:23.256 "raid_level": "raid5f", 00:13:23.256 "superblock": false, 00:13:23.256 "num_base_bdevs": 3, 00:13:23.256 "num_base_bdevs_discovered": 2, 00:13:23.256 "num_base_bdevs_operational": 3, 00:13:23.256 "base_bdevs_list": [ 00:13:23.256 { 00:13:23.256 "name": null, 00:13:23.256 "uuid": "9a560e0b-c64a-425a-8e1b-45b4a72066a6", 00:13:23.256 "is_configured": false, 00:13:23.256 "data_offset": 0, 00:13:23.256 "data_size": 65536 00:13:23.256 }, 00:13:23.256 { 00:13:23.256 "name": "BaseBdev2", 00:13:23.256 "uuid": "c866ce93-4219-4e89-8f5c-620b9fa7a191", 00:13:23.256 "is_configured": true, 00:13:23.256 "data_offset": 0, 00:13:23.256 "data_size": 65536 00:13:23.256 }, 00:13:23.256 { 00:13:23.256 "name": "BaseBdev3", 00:13:23.256 "uuid": "67587c17-d992-4f99-8fc4-b02a3f16740b", 00:13:23.256 "is_configured": true, 00:13:23.256 "data_offset": 0, 00:13:23.256 "data_size": 65536 00:13:23.256 } 00:13:23.256 ] 00:13:23.256 }' 00:13:23.256 12:56:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.256 12:56:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:23.824 12:56:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9a560e0b-c64a-425a-8e1b-45b4a72066a6 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.824 [2024-11-26 12:56:41.326997] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:23.824 [2024-11-26 12:56:41.327097] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:13:23.824 [2024-11-26 12:56:41.327124] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:23.824 [2024-11-26 12:56:41.327415] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006080 00:13:23.824 [2024-11-26 12:56:41.327882] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:13:23.824 [2024-11-26 12:56:41.327930] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:13:23.824 [2024-11-26 12:56:41.328139] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.824 NewBaseBdev 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.824 12:56:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.824 [ 00:13:23.824 { 00:13:23.824 "name": "NewBaseBdev", 00:13:23.824 "aliases": [ 00:13:23.824 "9a560e0b-c64a-425a-8e1b-45b4a72066a6" 00:13:23.824 ], 00:13:23.824 "product_name": "Malloc disk", 00:13:23.824 "block_size": 512, 00:13:23.824 "num_blocks": 65536, 00:13:23.824 "uuid": "9a560e0b-c64a-425a-8e1b-45b4a72066a6", 00:13:23.824 "assigned_rate_limits": { 00:13:23.824 "rw_ios_per_sec": 0, 00:13:23.824 "rw_mbytes_per_sec": 0, 00:13:23.824 "r_mbytes_per_sec": 0, 00:13:23.824 "w_mbytes_per_sec": 0 00:13:23.824 }, 00:13:23.824 "claimed": true, 00:13:23.824 "claim_type": "exclusive_write", 00:13:23.824 "zoned": false, 00:13:23.824 "supported_io_types": { 00:13:23.824 "read": true, 00:13:23.824 "write": true, 00:13:23.824 "unmap": true, 00:13:23.824 "flush": true, 00:13:23.824 "reset": true, 00:13:23.824 "nvme_admin": false, 00:13:23.824 "nvme_io": false, 00:13:23.824 "nvme_io_md": false, 00:13:23.824 "write_zeroes": true, 00:13:23.824 "zcopy": true, 00:13:23.824 "get_zone_info": false, 00:13:23.824 "zone_management": false, 00:13:23.824 "zone_append": false, 00:13:23.824 "compare": false, 00:13:23.824 "compare_and_write": false, 00:13:23.824 "abort": true, 00:13:23.824 "seek_hole": false, 00:13:23.824 "seek_data": false, 00:13:23.824 "copy": true, 00:13:23.824 "nvme_iov_md": false 00:13:23.824 }, 00:13:23.824 "memory_domains": [ 00:13:23.824 { 00:13:23.824 "dma_device_id": "system", 00:13:23.824 "dma_device_type": 1 00:13:23.824 }, 00:13:23.824 { 00:13:23.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.824 "dma_device_type": 2 00:13:23.824 } 00:13:23.824 ], 00:13:23.824 "driver_specific": {} 00:13:23.824 } 00:13:23.824 ] 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:23.824 12:56:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.824 "name": "Existed_Raid", 00:13:23.824 "uuid": "1916b94c-9040-46a6-8b20-9666801b045d", 00:13:23.824 "strip_size_kb": 64, 00:13:23.824 "state": "online", 
00:13:23.824 "raid_level": "raid5f", 00:13:23.824 "superblock": false, 00:13:23.824 "num_base_bdevs": 3, 00:13:23.824 "num_base_bdevs_discovered": 3, 00:13:23.824 "num_base_bdevs_operational": 3, 00:13:23.824 "base_bdevs_list": [ 00:13:23.824 { 00:13:23.824 "name": "NewBaseBdev", 00:13:23.824 "uuid": "9a560e0b-c64a-425a-8e1b-45b4a72066a6", 00:13:23.824 "is_configured": true, 00:13:23.824 "data_offset": 0, 00:13:23.824 "data_size": 65536 00:13:23.824 }, 00:13:23.824 { 00:13:23.824 "name": "BaseBdev2", 00:13:23.824 "uuid": "c866ce93-4219-4e89-8f5c-620b9fa7a191", 00:13:23.824 "is_configured": true, 00:13:23.824 "data_offset": 0, 00:13:23.824 "data_size": 65536 00:13:23.824 }, 00:13:23.824 { 00:13:23.824 "name": "BaseBdev3", 00:13:23.824 "uuid": "67587c17-d992-4f99-8fc4-b02a3f16740b", 00:13:23.824 "is_configured": true, 00:13:23.824 "data_offset": 0, 00:13:23.824 "data_size": 65536 00:13:23.824 } 00:13:23.824 ] 00:13:23.824 }' 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.824 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.392 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:24.392 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:24.392 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:24.392 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:24.392 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:24.392 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:24.392 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:24.392 12:56:41 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:24.392 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.392 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.392 [2024-11-26 12:56:41.814364] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:24.392 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.392 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:24.392 "name": "Existed_Raid", 00:13:24.392 "aliases": [ 00:13:24.392 "1916b94c-9040-46a6-8b20-9666801b045d" 00:13:24.392 ], 00:13:24.392 "product_name": "Raid Volume", 00:13:24.392 "block_size": 512, 00:13:24.392 "num_blocks": 131072, 00:13:24.392 "uuid": "1916b94c-9040-46a6-8b20-9666801b045d", 00:13:24.392 "assigned_rate_limits": { 00:13:24.392 "rw_ios_per_sec": 0, 00:13:24.392 "rw_mbytes_per_sec": 0, 00:13:24.392 "r_mbytes_per_sec": 0, 00:13:24.392 "w_mbytes_per_sec": 0 00:13:24.392 }, 00:13:24.392 "claimed": false, 00:13:24.392 "zoned": false, 00:13:24.392 "supported_io_types": { 00:13:24.392 "read": true, 00:13:24.392 "write": true, 00:13:24.392 "unmap": false, 00:13:24.392 "flush": false, 00:13:24.392 "reset": true, 00:13:24.392 "nvme_admin": false, 00:13:24.392 "nvme_io": false, 00:13:24.392 "nvme_io_md": false, 00:13:24.392 "write_zeroes": true, 00:13:24.392 "zcopy": false, 00:13:24.392 "get_zone_info": false, 00:13:24.392 "zone_management": false, 00:13:24.392 "zone_append": false, 00:13:24.392 "compare": false, 00:13:24.392 "compare_and_write": false, 00:13:24.392 "abort": false, 00:13:24.392 "seek_hole": false, 00:13:24.392 "seek_data": false, 00:13:24.392 "copy": false, 00:13:24.392 "nvme_iov_md": false 00:13:24.392 }, 00:13:24.392 "driver_specific": { 00:13:24.392 "raid": { 00:13:24.392 "uuid": "1916b94c-9040-46a6-8b20-9666801b045d", 
00:13:24.392 "strip_size_kb": 64, 00:13:24.392 "state": "online", 00:13:24.392 "raid_level": "raid5f", 00:13:24.392 "superblock": false, 00:13:24.392 "num_base_bdevs": 3, 00:13:24.392 "num_base_bdevs_discovered": 3, 00:13:24.392 "num_base_bdevs_operational": 3, 00:13:24.392 "base_bdevs_list": [ 00:13:24.392 { 00:13:24.392 "name": "NewBaseBdev", 00:13:24.392 "uuid": "9a560e0b-c64a-425a-8e1b-45b4a72066a6", 00:13:24.392 "is_configured": true, 00:13:24.392 "data_offset": 0, 00:13:24.392 "data_size": 65536 00:13:24.392 }, 00:13:24.392 { 00:13:24.392 "name": "BaseBdev2", 00:13:24.392 "uuid": "c866ce93-4219-4e89-8f5c-620b9fa7a191", 00:13:24.392 "is_configured": true, 00:13:24.392 "data_offset": 0, 00:13:24.392 "data_size": 65536 00:13:24.392 }, 00:13:24.392 { 00:13:24.392 "name": "BaseBdev3", 00:13:24.392 "uuid": "67587c17-d992-4f99-8fc4-b02a3f16740b", 00:13:24.392 "is_configured": true, 00:13:24.392 "data_offset": 0, 00:13:24.392 "data_size": 65536 00:13:24.392 } 00:13:24.392 ] 00:13:24.392 } 00:13:24.392 } 00:13:24.392 }' 00:13:24.392 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:24.392 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:24.392 BaseBdev2 00:13:24.392 BaseBdev3' 00:13:24.392 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.392 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:24.392 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.392 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.392 12:56:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:24.393 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.393 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.393 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.393 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.393 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.393 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.393 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.393 12:56:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:24.393 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.393 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.393 12:56:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.393 12:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.393 12:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.393 12:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.393 12:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:24.393 12:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.393 
12:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.393 12:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.393 12:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.393 12:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.393 12:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.393 12:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:24.393 12:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.393 12:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.393 [2024-11-26 12:56:42.061758] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:24.393 [2024-11-26 12:56:42.061783] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:24.393 [2024-11-26 12:56:42.061843] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:24.393 [2024-11-26 12:56:42.062061] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:24.393 [2024-11-26 12:56:42.062072] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:13:24.393 12:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.393 12:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 90645 00:13:24.393 12:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 90645 ']' 00:13:24.393 12:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 90645 
00:13:24.653 12:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:13:24.653 12:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:24.653 12:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90645 00:13:24.653 12:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:24.653 killing process with pid 90645 00:13:24.653 12:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:24.653 12:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90645' 00:13:24.653 12:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 90645 00:13:24.653 [2024-11-26 12:56:42.112579] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:24.653 12:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 90645 00:13:24.653 [2024-11-26 12:56:42.143030] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:24.913 ************************************ 00:13:24.913 END TEST raid5f_state_function_test 00:13:24.913 ************************************ 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:24.913 00:13:24.913 real 0m8.987s 00:13:24.913 user 0m15.236s 00:13:24.913 sys 0m2.004s 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.913 12:56:42 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:13:24.913 12:56:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:24.913 
12:56:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:24.913 12:56:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:24.913 ************************************ 00:13:24.913 START TEST raid5f_state_function_test_sb 00:13:24.913 ************************************ 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:24.913 
12:56:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=91250 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 91250' 00:13:24.913 Process raid pid: 91250 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 91250 00:13:24.913 12:56:42 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 91250 ']' 00:13:24.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.913 12:56:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:24.914 12:56:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.914 12:56:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:24.914 12:56:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.914 [2024-11-26 12:56:42.574428] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:24.914 [2024-11-26 12:56:42.574560] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.174 [2024-11-26 12:56:42.737088] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.174 [2024-11-26 12:56:42.784107] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.174 [2024-11-26 12:56:42.827105] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:25.174 [2024-11-26 12:56:42.827141] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:25.745 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:25.745 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:25.745 12:56:43 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:25.745 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.745 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.745 [2024-11-26 12:56:43.404948] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:25.745 [2024-11-26 12:56:43.405102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:25.745 [2024-11-26 12:56:43.405125] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:25.745 [2024-11-26 12:56:43.405135] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:25.745 [2024-11-26 12:56:43.405141] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:25.745 [2024-11-26 12:56:43.405155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:25.745 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.745 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:25.745 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.745 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:25.745 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:25.745 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:25.745 12:56:43 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:25.745 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.745 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.745 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.745 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.745 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.745 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.745 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.745 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.005 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.005 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.005 "name": "Existed_Raid", 00:13:26.005 "uuid": "f2281097-99e9-456d-ab5a-3f3a96dfbd90", 00:13:26.005 "strip_size_kb": 64, 00:13:26.005 "state": "configuring", 00:13:26.005 "raid_level": "raid5f", 00:13:26.005 "superblock": true, 00:13:26.005 "num_base_bdevs": 3, 00:13:26.005 "num_base_bdevs_discovered": 0, 00:13:26.005 "num_base_bdevs_operational": 3, 00:13:26.005 "base_bdevs_list": [ 00:13:26.005 { 00:13:26.005 "name": "BaseBdev1", 00:13:26.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.005 "is_configured": false, 00:13:26.005 "data_offset": 0, 00:13:26.005 "data_size": 0 00:13:26.005 }, 00:13:26.005 { 00:13:26.005 "name": "BaseBdev2", 00:13:26.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.005 "is_configured": false, 00:13:26.005 
"data_offset": 0, 00:13:26.005 "data_size": 0 00:13:26.005 }, 00:13:26.005 { 00:13:26.005 "name": "BaseBdev3", 00:13:26.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.005 "is_configured": false, 00:13:26.005 "data_offset": 0, 00:13:26.005 "data_size": 0 00:13:26.005 } 00:13:26.005 ] 00:13:26.005 }' 00:13:26.005 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.005 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.264 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:26.264 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.264 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.264 [2024-11-26 12:56:43.868051] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:26.264 [2024-11-26 12:56:43.868149] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:13:26.265 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.265 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:26.265 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.265 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.265 [2024-11-26 12:56:43.880062] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:26.265 [2024-11-26 12:56:43.880144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:26.265 [2024-11-26 12:56:43.880168] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:26.265 [2024-11-26 12:56:43.880198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:26.265 [2024-11-26 12:56:43.880215] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:26.265 [2024-11-26 12:56:43.880234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:26.265 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.265 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:26.265 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.265 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.265 [2024-11-26 12:56:43.901012] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:26.265 BaseBdev1 00:13:26.265 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.265 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:26.265 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:26.265 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:26.265 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:26.265 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:26.265 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:26.265 12:56:43 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:26.265 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.265 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.265 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.265 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:26.265 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.265 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.265 [ 00:13:26.265 { 00:13:26.265 "name": "BaseBdev1", 00:13:26.265 "aliases": [ 00:13:26.265 "8ef58782-1eff-468e-bb94-75604b3ee36e" 00:13:26.265 ], 00:13:26.265 "product_name": "Malloc disk", 00:13:26.265 "block_size": 512, 00:13:26.265 "num_blocks": 65536, 00:13:26.265 "uuid": "8ef58782-1eff-468e-bb94-75604b3ee36e", 00:13:26.265 "assigned_rate_limits": { 00:13:26.265 "rw_ios_per_sec": 0, 00:13:26.265 "rw_mbytes_per_sec": 0, 00:13:26.265 "r_mbytes_per_sec": 0, 00:13:26.265 "w_mbytes_per_sec": 0 00:13:26.265 }, 00:13:26.265 "claimed": true, 00:13:26.265 "claim_type": "exclusive_write", 00:13:26.265 "zoned": false, 00:13:26.265 "supported_io_types": { 00:13:26.265 "read": true, 00:13:26.265 "write": true, 00:13:26.265 "unmap": true, 00:13:26.265 "flush": true, 00:13:26.265 "reset": true, 00:13:26.265 "nvme_admin": false, 00:13:26.265 "nvme_io": false, 00:13:26.265 "nvme_io_md": false, 00:13:26.265 "write_zeroes": true, 00:13:26.265 "zcopy": true, 00:13:26.265 "get_zone_info": false, 00:13:26.265 "zone_management": false, 00:13:26.265 "zone_append": false, 00:13:26.265 "compare": false, 00:13:26.265 "compare_and_write": false, 00:13:26.265 "abort": true, 00:13:26.265 "seek_hole": false, 00:13:26.265 
"seek_data": false, 00:13:26.265 "copy": true, 00:13:26.265 "nvme_iov_md": false 00:13:26.265 }, 00:13:26.265 "memory_domains": [ 00:13:26.265 { 00:13:26.265 "dma_device_id": "system", 00:13:26.265 "dma_device_type": 1 00:13:26.265 }, 00:13:26.265 { 00:13:26.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.265 "dma_device_type": 2 00:13:26.265 } 00:13:26.265 ], 00:13:26.265 "driver_specific": {} 00:13:26.524 } 00:13:26.524 ] 00:13:26.524 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.524 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:26.524 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:26.524 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.524 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.524 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:26.524 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.524 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.524 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.524 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.524 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.524 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.524 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:26.524 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.524 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.524 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.524 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.524 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.524 "name": "Existed_Raid", 00:13:26.524 "uuid": "72e93191-1547-491e-a9a1-9017572e140b", 00:13:26.524 "strip_size_kb": 64, 00:13:26.524 "state": "configuring", 00:13:26.524 "raid_level": "raid5f", 00:13:26.524 "superblock": true, 00:13:26.524 "num_base_bdevs": 3, 00:13:26.524 "num_base_bdevs_discovered": 1, 00:13:26.524 "num_base_bdevs_operational": 3, 00:13:26.524 "base_bdevs_list": [ 00:13:26.524 { 00:13:26.524 "name": "BaseBdev1", 00:13:26.524 "uuid": "8ef58782-1eff-468e-bb94-75604b3ee36e", 00:13:26.524 "is_configured": true, 00:13:26.524 "data_offset": 2048, 00:13:26.524 "data_size": 63488 00:13:26.524 }, 00:13:26.524 { 00:13:26.524 "name": "BaseBdev2", 00:13:26.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.524 "is_configured": false, 00:13:26.524 "data_offset": 0, 00:13:26.524 "data_size": 0 00:13:26.524 }, 00:13:26.524 { 00:13:26.524 "name": "BaseBdev3", 00:13:26.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.524 "is_configured": false, 00:13:26.524 "data_offset": 0, 00:13:26.524 "data_size": 0 00:13:26.524 } 00:13:26.524 ] 00:13:26.524 }' 00:13:26.524 12:56:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.524 12:56:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.782 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:13:26.782 12:56:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.782 12:56:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.782 [2024-11-26 12:56:44.416161] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:26.782 [2024-11-26 12:56:44.416276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:13:26.782 12:56:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.782 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:26.782 12:56:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.782 12:56:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.782 [2024-11-26 12:56:44.428193] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:26.782 [2024-11-26 12:56:44.430040] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:26.782 [2024-11-26 12:56:44.430113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:26.782 [2024-11-26 12:56:44.430143] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:26.782 [2024-11-26 12:56:44.430167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:26.782 12:56:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.782 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:26.782 12:56:44 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:26.782 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:26.782 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.782 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.782 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:26.782 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.782 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.782 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.782 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.782 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.782 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.782 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.782 12:56:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.782 12:56:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.782 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.782 12:56:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.041 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.041 "name": 
"Existed_Raid", 00:13:27.041 "uuid": "9a6958fe-7a80-45f9-a021-40030e3eddca", 00:13:27.041 "strip_size_kb": 64, 00:13:27.041 "state": "configuring", 00:13:27.041 "raid_level": "raid5f", 00:13:27.041 "superblock": true, 00:13:27.041 "num_base_bdevs": 3, 00:13:27.041 "num_base_bdevs_discovered": 1, 00:13:27.041 "num_base_bdevs_operational": 3, 00:13:27.041 "base_bdevs_list": [ 00:13:27.041 { 00:13:27.041 "name": "BaseBdev1", 00:13:27.041 "uuid": "8ef58782-1eff-468e-bb94-75604b3ee36e", 00:13:27.041 "is_configured": true, 00:13:27.041 "data_offset": 2048, 00:13:27.041 "data_size": 63488 00:13:27.041 }, 00:13:27.041 { 00:13:27.041 "name": "BaseBdev2", 00:13:27.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.041 "is_configured": false, 00:13:27.041 "data_offset": 0, 00:13:27.041 "data_size": 0 00:13:27.041 }, 00:13:27.041 { 00:13:27.041 "name": "BaseBdev3", 00:13:27.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.041 "is_configured": false, 00:13:27.041 "data_offset": 0, 00:13:27.041 "data_size": 0 00:13:27.041 } 00:13:27.041 ] 00:13:27.041 }' 00:13:27.041 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.041 12:56:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.300 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:27.300 12:56:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.300 12:56:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.300 [2024-11-26 12:56:44.939047] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:27.300 BaseBdev2 00:13:27.300 12:56:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.300 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:13:27.300 12:56:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:27.300 12:56:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:27.300 12:56:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:27.300 12:56:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:27.300 12:56:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:27.300 12:56:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:27.300 12:56:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.300 12:56:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.300 12:56:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.300 12:56:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:27.300 12:56:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.300 12:56:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.300 [ 00:13:27.300 { 00:13:27.300 "name": "BaseBdev2", 00:13:27.300 "aliases": [ 00:13:27.300 "73adf6fe-c842-4c6e-9a4f-d1a6585873a5" 00:13:27.300 ], 00:13:27.300 "product_name": "Malloc disk", 00:13:27.300 "block_size": 512, 00:13:27.300 "num_blocks": 65536, 00:13:27.300 "uuid": "73adf6fe-c842-4c6e-9a4f-d1a6585873a5", 00:13:27.300 "assigned_rate_limits": { 00:13:27.300 "rw_ios_per_sec": 0, 00:13:27.300 "rw_mbytes_per_sec": 0, 00:13:27.300 "r_mbytes_per_sec": 0, 00:13:27.300 "w_mbytes_per_sec": 0 00:13:27.300 }, 00:13:27.300 "claimed": true, 
00:13:27.300 "claim_type": "exclusive_write", 00:13:27.300 "zoned": false, 00:13:27.300 "supported_io_types": { 00:13:27.300 "read": true, 00:13:27.300 "write": true, 00:13:27.300 "unmap": true, 00:13:27.300 "flush": true, 00:13:27.300 "reset": true, 00:13:27.300 "nvme_admin": false, 00:13:27.300 "nvme_io": false, 00:13:27.300 "nvme_io_md": false, 00:13:27.300 "write_zeroes": true, 00:13:27.300 "zcopy": true, 00:13:27.300 "get_zone_info": false, 00:13:27.300 "zone_management": false, 00:13:27.300 "zone_append": false, 00:13:27.300 "compare": false, 00:13:27.300 "compare_and_write": false, 00:13:27.300 "abort": true, 00:13:27.300 "seek_hole": false, 00:13:27.300 "seek_data": false, 00:13:27.300 "copy": true, 00:13:27.300 "nvme_iov_md": false 00:13:27.300 }, 00:13:27.300 "memory_domains": [ 00:13:27.300 { 00:13:27.300 "dma_device_id": "system", 00:13:27.300 "dma_device_type": 1 00:13:27.300 }, 00:13:27.300 { 00:13:27.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.300 "dma_device_type": 2 00:13:27.300 } 00:13:27.300 ], 00:13:27.300 "driver_specific": {} 00:13:27.558 } 00:13:27.558 ] 00:13:27.558 12:56:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.558 12:56:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:27.558 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:27.558 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:27.558 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:27.558 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.559 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.559 12:56:44 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:27.559 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.559 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.559 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.559 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.559 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.559 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.559 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.559 12:56:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.559 12:56:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.559 12:56:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.559 12:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.559 12:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.559 "name": "Existed_Raid", 00:13:27.559 "uuid": "9a6958fe-7a80-45f9-a021-40030e3eddca", 00:13:27.559 "strip_size_kb": 64, 00:13:27.559 "state": "configuring", 00:13:27.559 "raid_level": "raid5f", 00:13:27.559 "superblock": true, 00:13:27.559 "num_base_bdevs": 3, 00:13:27.559 "num_base_bdevs_discovered": 2, 00:13:27.559 "num_base_bdevs_operational": 3, 00:13:27.559 "base_bdevs_list": [ 00:13:27.559 { 00:13:27.559 "name": "BaseBdev1", 00:13:27.559 "uuid": "8ef58782-1eff-468e-bb94-75604b3ee36e", 
00:13:27.559 "is_configured": true, 00:13:27.559 "data_offset": 2048, 00:13:27.559 "data_size": 63488 00:13:27.559 }, 00:13:27.559 { 00:13:27.559 "name": "BaseBdev2", 00:13:27.559 "uuid": "73adf6fe-c842-4c6e-9a4f-d1a6585873a5", 00:13:27.559 "is_configured": true, 00:13:27.559 "data_offset": 2048, 00:13:27.559 "data_size": 63488 00:13:27.559 }, 00:13:27.559 { 00:13:27.559 "name": "BaseBdev3", 00:13:27.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.559 "is_configured": false, 00:13:27.559 "data_offset": 0, 00:13:27.559 "data_size": 0 00:13:27.559 } 00:13:27.559 ] 00:13:27.559 }' 00:13:27.559 12:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.559 12:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.819 12:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:27.819 12:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.819 12:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.819 [2024-11-26 12:56:45.453180] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:27.819 [2024-11-26 12:56:45.453388] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:13:27.819 [2024-11-26 12:56:45.453407] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:27.819 BaseBdev3 00:13:27.819 [2024-11-26 12:56:45.453685] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:27.819 [2024-11-26 12:56:45.454100] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:13:27.819 [2024-11-26 12:56:45.454121] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:13:27.819 
[2024-11-26 12:56:45.454346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.819 12:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.819 12:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:27.819 12:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:27.819 12:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:27.819 12:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:27.819 12:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:27.819 12:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:27.819 12:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:27.819 12:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.819 12:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.819 12:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.819 12:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:27.819 12:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.819 12:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.819 [ 00:13:27.819 { 00:13:27.819 "name": "BaseBdev3", 00:13:27.819 "aliases": [ 00:13:27.819 "66bfd804-6146-44ea-af0c-1b90649d64fc" 00:13:27.819 ], 00:13:27.819 "product_name": "Malloc disk", 00:13:27.819 "block_size": 512, 00:13:27.819 "num_blocks": 
65536, 00:13:27.819 "uuid": "66bfd804-6146-44ea-af0c-1b90649d64fc", 00:13:27.819 "assigned_rate_limits": { 00:13:27.819 "rw_ios_per_sec": 0, 00:13:27.819 "rw_mbytes_per_sec": 0, 00:13:27.819 "r_mbytes_per_sec": 0, 00:13:27.819 "w_mbytes_per_sec": 0 00:13:27.819 }, 00:13:27.819 "claimed": true, 00:13:27.819 "claim_type": "exclusive_write", 00:13:27.819 "zoned": false, 00:13:27.819 "supported_io_types": { 00:13:27.819 "read": true, 00:13:27.819 "write": true, 00:13:27.819 "unmap": true, 00:13:27.819 "flush": true, 00:13:27.819 "reset": true, 00:13:27.819 "nvme_admin": false, 00:13:27.819 "nvme_io": false, 00:13:27.819 "nvme_io_md": false, 00:13:27.819 "write_zeroes": true, 00:13:27.819 "zcopy": true, 00:13:27.819 "get_zone_info": false, 00:13:27.819 "zone_management": false, 00:13:27.819 "zone_append": false, 00:13:27.819 "compare": false, 00:13:27.819 "compare_and_write": false, 00:13:27.819 "abort": true, 00:13:27.819 "seek_hole": false, 00:13:27.819 "seek_data": false, 00:13:27.819 "copy": true, 00:13:27.819 "nvme_iov_md": false 00:13:27.819 }, 00:13:27.819 "memory_domains": [ 00:13:27.819 { 00:13:27.819 "dma_device_id": "system", 00:13:27.819 "dma_device_type": 1 00:13:27.819 }, 00:13:27.819 { 00:13:27.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.819 "dma_device_type": 2 00:13:27.819 } 00:13:27.819 ], 00:13:27.819 "driver_specific": {} 00:13:27.819 } 00:13:27.819 ] 00:13:27.819 12:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.819 12:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:27.819 12:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:27.819 12:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:27.819 12:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 
00:13:27.819 12:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.819 12:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.819 12:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:27.819 12:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.819 12:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:28.079 12:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.079 12:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.079 12:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.079 12:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.079 12:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.079 12:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.079 12:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.079 12:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.079 12:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.079 12:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.079 "name": "Existed_Raid", 00:13:28.079 "uuid": "9a6958fe-7a80-45f9-a021-40030e3eddca", 00:13:28.079 "strip_size_kb": 64, 00:13:28.079 "state": "online", 00:13:28.079 "raid_level": "raid5f", 00:13:28.079 "superblock": true, 
00:13:28.079 "num_base_bdevs": 3, 00:13:28.079 "num_base_bdevs_discovered": 3, 00:13:28.079 "num_base_bdevs_operational": 3, 00:13:28.079 "base_bdevs_list": [ 00:13:28.079 { 00:13:28.079 "name": "BaseBdev1", 00:13:28.079 "uuid": "8ef58782-1eff-468e-bb94-75604b3ee36e", 00:13:28.079 "is_configured": true, 00:13:28.079 "data_offset": 2048, 00:13:28.079 "data_size": 63488 00:13:28.079 }, 00:13:28.079 { 00:13:28.079 "name": "BaseBdev2", 00:13:28.079 "uuid": "73adf6fe-c842-4c6e-9a4f-d1a6585873a5", 00:13:28.079 "is_configured": true, 00:13:28.079 "data_offset": 2048, 00:13:28.079 "data_size": 63488 00:13:28.079 }, 00:13:28.079 { 00:13:28.079 "name": "BaseBdev3", 00:13:28.079 "uuid": "66bfd804-6146-44ea-af0c-1b90649d64fc", 00:13:28.079 "is_configured": true, 00:13:28.079 "data_offset": 2048, 00:13:28.079 "data_size": 63488 00:13:28.079 } 00:13:28.079 ] 00:13:28.079 }' 00:13:28.079 12:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.079 12:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.338 12:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:28.338 12:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:28.338 12:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:28.338 12:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:28.338 12:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:28.338 12:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:28.338 12:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:28.338 12:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:28.338 12:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.338 12:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.338 [2024-11-26 12:56:45.964546] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:28.338 12:56:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.338 12:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:28.338 "name": "Existed_Raid", 00:13:28.338 "aliases": [ 00:13:28.338 "9a6958fe-7a80-45f9-a021-40030e3eddca" 00:13:28.338 ], 00:13:28.338 "product_name": "Raid Volume", 00:13:28.338 "block_size": 512, 00:13:28.338 "num_blocks": 126976, 00:13:28.338 "uuid": "9a6958fe-7a80-45f9-a021-40030e3eddca", 00:13:28.338 "assigned_rate_limits": { 00:13:28.338 "rw_ios_per_sec": 0, 00:13:28.338 "rw_mbytes_per_sec": 0, 00:13:28.338 "r_mbytes_per_sec": 0, 00:13:28.338 "w_mbytes_per_sec": 0 00:13:28.338 }, 00:13:28.338 "claimed": false, 00:13:28.338 "zoned": false, 00:13:28.338 "supported_io_types": { 00:13:28.338 "read": true, 00:13:28.338 "write": true, 00:13:28.338 "unmap": false, 00:13:28.338 "flush": false, 00:13:28.338 "reset": true, 00:13:28.338 "nvme_admin": false, 00:13:28.338 "nvme_io": false, 00:13:28.338 "nvme_io_md": false, 00:13:28.338 "write_zeroes": true, 00:13:28.338 "zcopy": false, 00:13:28.338 "get_zone_info": false, 00:13:28.338 "zone_management": false, 00:13:28.339 "zone_append": false, 00:13:28.339 "compare": false, 00:13:28.339 "compare_and_write": false, 00:13:28.339 "abort": false, 00:13:28.339 "seek_hole": false, 00:13:28.339 "seek_data": false, 00:13:28.339 "copy": false, 00:13:28.339 "nvme_iov_md": false 00:13:28.339 }, 00:13:28.339 "driver_specific": { 00:13:28.339 "raid": { 00:13:28.339 "uuid": "9a6958fe-7a80-45f9-a021-40030e3eddca", 00:13:28.339 
"strip_size_kb": 64, 00:13:28.339 "state": "online", 00:13:28.339 "raid_level": "raid5f", 00:13:28.339 "superblock": true, 00:13:28.339 "num_base_bdevs": 3, 00:13:28.339 "num_base_bdevs_discovered": 3, 00:13:28.339 "num_base_bdevs_operational": 3, 00:13:28.339 "base_bdevs_list": [ 00:13:28.339 { 00:13:28.339 "name": "BaseBdev1", 00:13:28.339 "uuid": "8ef58782-1eff-468e-bb94-75604b3ee36e", 00:13:28.339 "is_configured": true, 00:13:28.339 "data_offset": 2048, 00:13:28.339 "data_size": 63488 00:13:28.339 }, 00:13:28.339 { 00:13:28.339 "name": "BaseBdev2", 00:13:28.339 "uuid": "73adf6fe-c842-4c6e-9a4f-d1a6585873a5", 00:13:28.339 "is_configured": true, 00:13:28.339 "data_offset": 2048, 00:13:28.339 "data_size": 63488 00:13:28.339 }, 00:13:28.339 { 00:13:28.339 "name": "BaseBdev3", 00:13:28.339 "uuid": "66bfd804-6146-44ea-af0c-1b90649d64fc", 00:13:28.339 "is_configured": true, 00:13:28.339 "data_offset": 2048, 00:13:28.339 "data_size": 63488 00:13:28.339 } 00:13:28.339 ] 00:13:28.339 } 00:13:28.339 } 00:13:28.339 }' 00:13:28.339 12:56:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:28.599 BaseBdev2 00:13:28.599 BaseBdev3' 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.599 12:56:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.599 [2024-11-26 12:56:46.251917] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.599 
12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.599 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.858 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.858 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.858 "name": "Existed_Raid", 00:13:28.858 "uuid": "9a6958fe-7a80-45f9-a021-40030e3eddca", 00:13:28.858 "strip_size_kb": 64, 00:13:28.858 "state": "online", 00:13:28.858 "raid_level": "raid5f", 00:13:28.858 "superblock": true, 00:13:28.858 "num_base_bdevs": 3, 00:13:28.858 "num_base_bdevs_discovered": 2, 00:13:28.858 "num_base_bdevs_operational": 2, 00:13:28.858 
"base_bdevs_list": [ 00:13:28.858 { 00:13:28.858 "name": null, 00:13:28.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.858 "is_configured": false, 00:13:28.858 "data_offset": 0, 00:13:28.858 "data_size": 63488 00:13:28.858 }, 00:13:28.858 { 00:13:28.858 "name": "BaseBdev2", 00:13:28.858 "uuid": "73adf6fe-c842-4c6e-9a4f-d1a6585873a5", 00:13:28.858 "is_configured": true, 00:13:28.858 "data_offset": 2048, 00:13:28.858 "data_size": 63488 00:13:28.858 }, 00:13:28.858 { 00:13:28.858 "name": "BaseBdev3", 00:13:28.858 "uuid": "66bfd804-6146-44ea-af0c-1b90649d64fc", 00:13:28.858 "is_configured": true, 00:13:28.858 "data_offset": 2048, 00:13:28.858 "data_size": 63488 00:13:28.858 } 00:13:28.858 ] 00:13:28.858 }' 00:13:28.858 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.858 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.119 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:29.119 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:29.119 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:29.119 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.119 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.119 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.119 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.119 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:29.119 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:29.119 12:56:46 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:29.119 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.119 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.119 [2024-11-26 12:56:46.762282] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:29.119 [2024-11-26 12:56:46.762423] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:29.119 [2024-11-26 12:56:46.773370] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:29.119 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.119 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:29.119 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:29.119 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.119 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:29.119 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.119 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.381 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.381 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:29.381 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:29.381 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:29.381 12:56:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.381 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.381 [2024-11-26 12:56:46.829320] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:29.381 [2024-11-26 12:56:46.829374] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:13:29.381 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.381 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:29.381 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:29.381 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.381 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:29.381 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.381 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.381 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.381 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:29.381 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:29.381 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:29.381 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:29.381 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:29.381 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 
-- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:29.381 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.381 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.381 BaseBdev2 00:13:29.381 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.381 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:29.381 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.382 [ 00:13:29.382 { 00:13:29.382 "name": "BaseBdev2", 
00:13:29.382 "aliases": [ 00:13:29.382 "4cf3b9e2-78a7-45d8-98b7-475f523ad4ab" 00:13:29.382 ], 00:13:29.382 "product_name": "Malloc disk", 00:13:29.382 "block_size": 512, 00:13:29.382 "num_blocks": 65536, 00:13:29.382 "uuid": "4cf3b9e2-78a7-45d8-98b7-475f523ad4ab", 00:13:29.382 "assigned_rate_limits": { 00:13:29.382 "rw_ios_per_sec": 0, 00:13:29.382 "rw_mbytes_per_sec": 0, 00:13:29.382 "r_mbytes_per_sec": 0, 00:13:29.382 "w_mbytes_per_sec": 0 00:13:29.382 }, 00:13:29.382 "claimed": false, 00:13:29.382 "zoned": false, 00:13:29.382 "supported_io_types": { 00:13:29.382 "read": true, 00:13:29.382 "write": true, 00:13:29.382 "unmap": true, 00:13:29.382 "flush": true, 00:13:29.382 "reset": true, 00:13:29.382 "nvme_admin": false, 00:13:29.382 "nvme_io": false, 00:13:29.382 "nvme_io_md": false, 00:13:29.382 "write_zeroes": true, 00:13:29.382 "zcopy": true, 00:13:29.382 "get_zone_info": false, 00:13:29.382 "zone_management": false, 00:13:29.382 "zone_append": false, 00:13:29.382 "compare": false, 00:13:29.382 "compare_and_write": false, 00:13:29.382 "abort": true, 00:13:29.382 "seek_hole": false, 00:13:29.382 "seek_data": false, 00:13:29.382 "copy": true, 00:13:29.382 "nvme_iov_md": false 00:13:29.382 }, 00:13:29.382 "memory_domains": [ 00:13:29.382 { 00:13:29.382 "dma_device_id": "system", 00:13:29.382 "dma_device_type": 1 00:13:29.382 }, 00:13:29.382 { 00:13:29.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.382 "dma_device_type": 2 00:13:29.382 } 00:13:29.382 ], 00:13:29.382 "driver_specific": {} 00:13:29.382 } 00:13:29.382 ] 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 
00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.382 BaseBdev3 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:29.382 [ 00:13:29.382 { 00:13:29.382 "name": "BaseBdev3", 00:13:29.382 "aliases": [ 00:13:29.382 "891ce864-2826-4c60-aa22-fddf83d8ea88" 00:13:29.382 ], 00:13:29.382 "product_name": "Malloc disk", 00:13:29.382 "block_size": 512, 00:13:29.382 "num_blocks": 65536, 00:13:29.382 "uuid": "891ce864-2826-4c60-aa22-fddf83d8ea88", 00:13:29.382 "assigned_rate_limits": { 00:13:29.382 "rw_ios_per_sec": 0, 00:13:29.382 "rw_mbytes_per_sec": 0, 00:13:29.382 "r_mbytes_per_sec": 0, 00:13:29.382 "w_mbytes_per_sec": 0 00:13:29.382 }, 00:13:29.382 "claimed": false, 00:13:29.382 "zoned": false, 00:13:29.382 "supported_io_types": { 00:13:29.382 "read": true, 00:13:29.382 "write": true, 00:13:29.382 "unmap": true, 00:13:29.382 "flush": true, 00:13:29.382 "reset": true, 00:13:29.382 "nvme_admin": false, 00:13:29.382 "nvme_io": false, 00:13:29.382 "nvme_io_md": false, 00:13:29.382 "write_zeroes": true, 00:13:29.382 "zcopy": true, 00:13:29.382 "get_zone_info": false, 00:13:29.382 "zone_management": false, 00:13:29.382 "zone_append": false, 00:13:29.382 "compare": false, 00:13:29.382 "compare_and_write": false, 00:13:29.382 "abort": true, 00:13:29.382 "seek_hole": false, 00:13:29.382 "seek_data": false, 00:13:29.382 "copy": true, 00:13:29.382 "nvme_iov_md": false 00:13:29.382 }, 00:13:29.382 "memory_domains": [ 00:13:29.382 { 00:13:29.382 "dma_device_id": "system", 00:13:29.382 "dma_device_type": 1 00:13:29.382 }, 00:13:29.382 { 00:13:29.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.382 "dma_device_type": 2 00:13:29.382 } 00:13:29.382 ], 00:13:29.382 "driver_specific": {} 00:13:29.382 } 00:13:29.382 ] 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:29.382 
12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.382 12:56:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.382 [2024-11-26 12:56:47.004338] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:29.382 [2024-11-26 12:56:47.004381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:29.382 [2024-11-26 12:56:47.004402] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:29.382 [2024-11-26 12:56:47.006235] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:29.382 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.382 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:29.382 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.382 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.382 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:29.382 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.382 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:29.382 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:13:29.382 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.382 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.382 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.382 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.382 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.382 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.382 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.382 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.642 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.642 "name": "Existed_Raid", 00:13:29.642 "uuid": "3ae406d2-5390-443b-abf9-74bd61286ec9", 00:13:29.642 "strip_size_kb": 64, 00:13:29.642 "state": "configuring", 00:13:29.642 "raid_level": "raid5f", 00:13:29.642 "superblock": true, 00:13:29.642 "num_base_bdevs": 3, 00:13:29.642 "num_base_bdevs_discovered": 2, 00:13:29.642 "num_base_bdevs_operational": 3, 00:13:29.642 "base_bdevs_list": [ 00:13:29.642 { 00:13:29.642 "name": "BaseBdev1", 00:13:29.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.642 "is_configured": false, 00:13:29.642 "data_offset": 0, 00:13:29.642 "data_size": 0 00:13:29.642 }, 00:13:29.642 { 00:13:29.642 "name": "BaseBdev2", 00:13:29.642 "uuid": "4cf3b9e2-78a7-45d8-98b7-475f523ad4ab", 00:13:29.642 "is_configured": true, 00:13:29.642 "data_offset": 2048, 00:13:29.642 "data_size": 63488 00:13:29.642 }, 00:13:29.642 { 00:13:29.642 "name": "BaseBdev3", 00:13:29.642 "uuid": 
"891ce864-2826-4c60-aa22-fddf83d8ea88", 00:13:29.642 "is_configured": true, 00:13:29.642 "data_offset": 2048, 00:13:29.642 "data_size": 63488 00:13:29.642 } 00:13:29.642 ] 00:13:29.642 }' 00:13:29.642 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.642 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.902 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:29.902 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.902 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.902 [2024-11-26 12:56:47.459558] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:29.902 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.902 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:29.902 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.902 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.902 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:29.902 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.902 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:29.902 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.902 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.902 12:56:47 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.902 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.902 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.902 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.902 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.902 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.902 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.902 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.902 "name": "Existed_Raid", 00:13:29.902 "uuid": "3ae406d2-5390-443b-abf9-74bd61286ec9", 00:13:29.902 "strip_size_kb": 64, 00:13:29.902 "state": "configuring", 00:13:29.902 "raid_level": "raid5f", 00:13:29.902 "superblock": true, 00:13:29.902 "num_base_bdevs": 3, 00:13:29.902 "num_base_bdevs_discovered": 1, 00:13:29.902 "num_base_bdevs_operational": 3, 00:13:29.902 "base_bdevs_list": [ 00:13:29.902 { 00:13:29.902 "name": "BaseBdev1", 00:13:29.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.902 "is_configured": false, 00:13:29.902 "data_offset": 0, 00:13:29.902 "data_size": 0 00:13:29.902 }, 00:13:29.902 { 00:13:29.902 "name": null, 00:13:29.902 "uuid": "4cf3b9e2-78a7-45d8-98b7-475f523ad4ab", 00:13:29.902 "is_configured": false, 00:13:29.902 "data_offset": 0, 00:13:29.902 "data_size": 63488 00:13:29.902 }, 00:13:29.902 { 00:13:29.902 "name": "BaseBdev3", 00:13:29.902 "uuid": "891ce864-2826-4c60-aa22-fddf83d8ea88", 00:13:29.902 "is_configured": true, 00:13:29.902 "data_offset": 2048, 00:13:29.902 "data_size": 63488 00:13:29.902 } 00:13:29.902 ] 
00:13:29.902 }' 00:13:29.902 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.902 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.479 [2024-11-26 12:56:47.925947] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:30.479 BaseBdev1 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.479 [ 00:13:30.479 { 00:13:30.479 "name": "BaseBdev1", 00:13:30.479 "aliases": [ 00:13:30.479 "7e9309f0-bc24-4720-89f4-f839c79085c3" 00:13:30.479 ], 00:13:30.479 "product_name": "Malloc disk", 00:13:30.479 "block_size": 512, 00:13:30.479 "num_blocks": 65536, 00:13:30.479 "uuid": "7e9309f0-bc24-4720-89f4-f839c79085c3", 00:13:30.479 "assigned_rate_limits": { 00:13:30.479 "rw_ios_per_sec": 0, 00:13:30.479 "rw_mbytes_per_sec": 0, 00:13:30.479 "r_mbytes_per_sec": 0, 00:13:30.479 "w_mbytes_per_sec": 0 00:13:30.479 }, 00:13:30.479 "claimed": true, 00:13:30.479 "claim_type": "exclusive_write", 00:13:30.479 "zoned": false, 00:13:30.479 "supported_io_types": { 00:13:30.479 "read": true, 00:13:30.479 "write": true, 00:13:30.479 "unmap": true, 00:13:30.479 "flush": true, 00:13:30.479 "reset": true, 00:13:30.479 "nvme_admin": false, 00:13:30.479 "nvme_io": false, 00:13:30.479 
"nvme_io_md": false, 00:13:30.479 "write_zeroes": true, 00:13:30.479 "zcopy": true, 00:13:30.479 "get_zone_info": false, 00:13:30.479 "zone_management": false, 00:13:30.479 "zone_append": false, 00:13:30.479 "compare": false, 00:13:30.479 "compare_and_write": false, 00:13:30.479 "abort": true, 00:13:30.479 "seek_hole": false, 00:13:30.479 "seek_data": false, 00:13:30.479 "copy": true, 00:13:30.479 "nvme_iov_md": false 00:13:30.479 }, 00:13:30.479 "memory_domains": [ 00:13:30.479 { 00:13:30.479 "dma_device_id": "system", 00:13:30.479 "dma_device_type": 1 00:13:30.479 }, 00:13:30.479 { 00:13:30.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.479 "dma_device_type": 2 00:13:30.479 } 00:13:30.479 ], 00:13:30.479 "driver_specific": {} 00:13:30.479 } 00:13:30.479 ] 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.479 
12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.479 12:56:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.479 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.479 "name": "Existed_Raid", 00:13:30.479 "uuid": "3ae406d2-5390-443b-abf9-74bd61286ec9", 00:13:30.479 "strip_size_kb": 64, 00:13:30.479 "state": "configuring", 00:13:30.479 "raid_level": "raid5f", 00:13:30.479 "superblock": true, 00:13:30.479 "num_base_bdevs": 3, 00:13:30.479 "num_base_bdevs_discovered": 2, 00:13:30.479 "num_base_bdevs_operational": 3, 00:13:30.479 "base_bdevs_list": [ 00:13:30.479 { 00:13:30.479 "name": "BaseBdev1", 00:13:30.479 "uuid": "7e9309f0-bc24-4720-89f4-f839c79085c3", 00:13:30.479 "is_configured": true, 00:13:30.479 "data_offset": 2048, 00:13:30.479 "data_size": 63488 00:13:30.479 }, 00:13:30.479 { 00:13:30.479 "name": null, 00:13:30.479 "uuid": "4cf3b9e2-78a7-45d8-98b7-475f523ad4ab", 00:13:30.479 "is_configured": false, 00:13:30.479 "data_offset": 0, 00:13:30.479 "data_size": 63488 00:13:30.479 }, 00:13:30.479 { 00:13:30.479 "name": "BaseBdev3", 00:13:30.479 "uuid": "891ce864-2826-4c60-aa22-fddf83d8ea88", 00:13:30.479 "is_configured": true, 00:13:30.479 "data_offset": 2048, 00:13:30.479 "data_size": 63488 00:13:30.479 } 
00:13:30.479 ] 00:13:30.479 }' 00:13:30.479 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.479 12:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.765 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.765 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:30.765 12:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.765 12:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.765 12:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.765 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:30.765 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:30.765 12:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.765 12:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.040 [2024-11-26 12:56:48.441119] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:31.040 12:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.040 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:31.040 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.040 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:31.040 12:56:48 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:31.040 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:31.040 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:31.040 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.040 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.040 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.040 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.040 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.040 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.040 12:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.040 12:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.040 12:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.040 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.040 "name": "Existed_Raid", 00:13:31.040 "uuid": "3ae406d2-5390-443b-abf9-74bd61286ec9", 00:13:31.040 "strip_size_kb": 64, 00:13:31.040 "state": "configuring", 00:13:31.040 "raid_level": "raid5f", 00:13:31.040 "superblock": true, 00:13:31.040 "num_base_bdevs": 3, 00:13:31.040 "num_base_bdevs_discovered": 1, 00:13:31.040 "num_base_bdevs_operational": 3, 00:13:31.040 "base_bdevs_list": [ 00:13:31.040 { 00:13:31.040 "name": "BaseBdev1", 00:13:31.040 "uuid": "7e9309f0-bc24-4720-89f4-f839c79085c3", 00:13:31.040 "is_configured": true, 
00:13:31.040 "data_offset": 2048, 00:13:31.040 "data_size": 63488 00:13:31.040 }, 00:13:31.040 { 00:13:31.040 "name": null, 00:13:31.040 "uuid": "4cf3b9e2-78a7-45d8-98b7-475f523ad4ab", 00:13:31.040 "is_configured": false, 00:13:31.040 "data_offset": 0, 00:13:31.040 "data_size": 63488 00:13:31.040 }, 00:13:31.040 { 00:13:31.040 "name": null, 00:13:31.040 "uuid": "891ce864-2826-4c60-aa22-fddf83d8ea88", 00:13:31.040 "is_configured": false, 00:13:31.040 "data_offset": 0, 00:13:31.040 "data_size": 63488 00:13:31.040 } 00:13:31.040 ] 00:13:31.040 }' 00:13:31.040 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.040 12:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.300 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.300 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:31.300 12:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.300 12:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.300 12:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.300 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:31.300 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:31.300 12:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.300 12:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.300 [2024-11-26 12:56:48.968239] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:31.300 12:56:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.300 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:31.300 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.300 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:31.300 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:31.300 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:31.300 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:31.300 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.300 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.300 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.300 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.560 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.560 12:56:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.560 12:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.560 12:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.560 12:56:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.560 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:31.560 "name": "Existed_Raid", 00:13:31.560 "uuid": "3ae406d2-5390-443b-abf9-74bd61286ec9", 00:13:31.560 "strip_size_kb": 64, 00:13:31.560 "state": "configuring", 00:13:31.560 "raid_level": "raid5f", 00:13:31.560 "superblock": true, 00:13:31.560 "num_base_bdevs": 3, 00:13:31.560 "num_base_bdevs_discovered": 2, 00:13:31.560 "num_base_bdevs_operational": 3, 00:13:31.560 "base_bdevs_list": [ 00:13:31.560 { 00:13:31.560 "name": "BaseBdev1", 00:13:31.560 "uuid": "7e9309f0-bc24-4720-89f4-f839c79085c3", 00:13:31.560 "is_configured": true, 00:13:31.560 "data_offset": 2048, 00:13:31.560 "data_size": 63488 00:13:31.560 }, 00:13:31.560 { 00:13:31.560 "name": null, 00:13:31.560 "uuid": "4cf3b9e2-78a7-45d8-98b7-475f523ad4ab", 00:13:31.560 "is_configured": false, 00:13:31.560 "data_offset": 0, 00:13:31.560 "data_size": 63488 00:13:31.560 }, 00:13:31.560 { 00:13:31.560 "name": "BaseBdev3", 00:13:31.560 "uuid": "891ce864-2826-4c60-aa22-fddf83d8ea88", 00:13:31.560 "is_configured": true, 00:13:31.560 "data_offset": 2048, 00:13:31.560 "data_size": 63488 00:13:31.560 } 00:13:31.560 ] 00:13:31.560 }' 00:13:31.560 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.560 12:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.819 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.819 12:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.819 12:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.819 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:31.819 12:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.819 12:56:49 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:31.819 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:31.819 12:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.819 12:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.819 [2024-11-26 12:56:49.463389] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:31.819 12:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.819 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:31.819 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.819 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:31.819 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:31.819 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:31.819 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:31.819 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.819 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.819 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.819 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.819 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.819 12:56:49 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.819 12:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.819 12:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.079 12:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.079 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.079 "name": "Existed_Raid", 00:13:32.079 "uuid": "3ae406d2-5390-443b-abf9-74bd61286ec9", 00:13:32.079 "strip_size_kb": 64, 00:13:32.079 "state": "configuring", 00:13:32.079 "raid_level": "raid5f", 00:13:32.079 "superblock": true, 00:13:32.079 "num_base_bdevs": 3, 00:13:32.079 "num_base_bdevs_discovered": 1, 00:13:32.079 "num_base_bdevs_operational": 3, 00:13:32.079 "base_bdevs_list": [ 00:13:32.079 { 00:13:32.079 "name": null, 00:13:32.079 "uuid": "7e9309f0-bc24-4720-89f4-f839c79085c3", 00:13:32.079 "is_configured": false, 00:13:32.079 "data_offset": 0, 00:13:32.079 "data_size": 63488 00:13:32.079 }, 00:13:32.079 { 00:13:32.079 "name": null, 00:13:32.079 "uuid": "4cf3b9e2-78a7-45d8-98b7-475f523ad4ab", 00:13:32.079 "is_configured": false, 00:13:32.079 "data_offset": 0, 00:13:32.079 "data_size": 63488 00:13:32.079 }, 00:13:32.079 { 00:13:32.079 "name": "BaseBdev3", 00:13:32.079 "uuid": "891ce864-2826-4c60-aa22-fddf83d8ea88", 00:13:32.079 "is_configured": true, 00:13:32.079 "data_offset": 2048, 00:13:32.079 "data_size": 63488 00:13:32.079 } 00:13:32.079 ] 00:13:32.079 }' 00:13:32.079 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.079 12:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.339 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:13:32.339 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.339 12:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.339 12:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.339 12:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.339 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:32.339 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:32.339 12:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.339 12:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.339 [2024-11-26 12:56:49.973002] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:32.339 12:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.339 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:32.339 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:32.339 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:32.339 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:32.339 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:32.339 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:32.339 
12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.339 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.339 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.339 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.339 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.339 12:56:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.339 12:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.339 12:56:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.339 12:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.599 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.599 "name": "Existed_Raid", 00:13:32.599 "uuid": "3ae406d2-5390-443b-abf9-74bd61286ec9", 00:13:32.599 "strip_size_kb": 64, 00:13:32.599 "state": "configuring", 00:13:32.599 "raid_level": "raid5f", 00:13:32.599 "superblock": true, 00:13:32.599 "num_base_bdevs": 3, 00:13:32.599 "num_base_bdevs_discovered": 2, 00:13:32.599 "num_base_bdevs_operational": 3, 00:13:32.599 "base_bdevs_list": [ 00:13:32.599 { 00:13:32.599 "name": null, 00:13:32.599 "uuid": "7e9309f0-bc24-4720-89f4-f839c79085c3", 00:13:32.599 "is_configured": false, 00:13:32.599 "data_offset": 0, 00:13:32.599 "data_size": 63488 00:13:32.599 }, 00:13:32.599 { 00:13:32.599 "name": "BaseBdev2", 00:13:32.599 "uuid": "4cf3b9e2-78a7-45d8-98b7-475f523ad4ab", 00:13:32.599 "is_configured": true, 00:13:32.599 "data_offset": 2048, 00:13:32.599 "data_size": 63488 00:13:32.599 }, 
00:13:32.599 { 00:13:32.599 "name": "BaseBdev3", 00:13:32.599 "uuid": "891ce864-2826-4c60-aa22-fddf83d8ea88", 00:13:32.599 "is_configured": true, 00:13:32.599 "data_offset": 2048, 00:13:32.599 "data_size": 63488 00:13:32.599 } 00:13:32.599 ] 00:13:32.599 }' 00:13:32.599 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.599 12:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.858 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.858 12:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.858 12:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.858 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7e9309f0-bc24-4720-89f4-f839c79085c3 00:13:32.859 12:56:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.859 [2024-11-26 12:56:50.487254] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:32.859 [2024-11-26 12:56:50.487445] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:13:32.859 [2024-11-26 12:56:50.487467] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:32.859 NewBaseBdev 00:13:32.859 [2024-11-26 12:56:50.487762] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:32.859 [2024-11-26 12:56:50.488187] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:13:32.859 [2024-11-26 12:56:50.488203] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:13:32.859 [2024-11-26 12:56:50.488305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:32.859 12:56:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.859 [ 00:13:32.859 { 00:13:32.859 "name": "NewBaseBdev", 00:13:32.859 "aliases": [ 00:13:32.859 "7e9309f0-bc24-4720-89f4-f839c79085c3" 00:13:32.859 ], 00:13:32.859 "product_name": "Malloc disk", 00:13:32.859 "block_size": 512, 00:13:32.859 "num_blocks": 65536, 00:13:32.859 "uuid": "7e9309f0-bc24-4720-89f4-f839c79085c3", 00:13:32.859 "assigned_rate_limits": { 00:13:32.859 "rw_ios_per_sec": 0, 00:13:32.859 "rw_mbytes_per_sec": 0, 00:13:32.859 "r_mbytes_per_sec": 0, 00:13:32.859 "w_mbytes_per_sec": 0 00:13:32.859 }, 00:13:32.859 "claimed": true, 00:13:32.859 "claim_type": "exclusive_write", 00:13:32.859 "zoned": false, 00:13:32.859 "supported_io_types": { 00:13:32.859 "read": true, 00:13:32.859 "write": true, 00:13:32.859 "unmap": true, 00:13:32.859 "flush": true, 00:13:32.859 "reset": true, 00:13:32.859 "nvme_admin": false, 00:13:32.859 "nvme_io": false, 00:13:32.859 "nvme_io_md": false, 00:13:32.859 "write_zeroes": true, 00:13:32.859 "zcopy": true, 00:13:32.859 "get_zone_info": false, 00:13:32.859 "zone_management": false, 00:13:32.859 "zone_append": false, 00:13:32.859 "compare": false, 00:13:32.859 "compare_and_write": false, 00:13:32.859 "abort": true, 
00:13:32.859 "seek_hole": false, 00:13:32.859 "seek_data": false, 00:13:32.859 "copy": true, 00:13:32.859 "nvme_iov_md": false 00:13:32.859 }, 00:13:32.859 "memory_domains": [ 00:13:32.859 { 00:13:32.859 "dma_device_id": "system", 00:13:32.859 "dma_device_type": 1 00:13:32.859 }, 00:13:32.859 { 00:13:32.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.859 "dma_device_type": 2 00:13:32.859 } 00:13:32.859 ], 00:13:32.859 "driver_specific": {} 00:13:32.859 } 00:13:32.859 ] 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.859 12:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.119 12:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.119 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.119 "name": "Existed_Raid", 00:13:33.119 "uuid": "3ae406d2-5390-443b-abf9-74bd61286ec9", 00:13:33.119 "strip_size_kb": 64, 00:13:33.119 "state": "online", 00:13:33.119 "raid_level": "raid5f", 00:13:33.119 "superblock": true, 00:13:33.119 "num_base_bdevs": 3, 00:13:33.119 "num_base_bdevs_discovered": 3, 00:13:33.119 "num_base_bdevs_operational": 3, 00:13:33.119 "base_bdevs_list": [ 00:13:33.119 { 00:13:33.119 "name": "NewBaseBdev", 00:13:33.119 "uuid": "7e9309f0-bc24-4720-89f4-f839c79085c3", 00:13:33.119 "is_configured": true, 00:13:33.119 "data_offset": 2048, 00:13:33.119 "data_size": 63488 00:13:33.119 }, 00:13:33.119 { 00:13:33.119 "name": "BaseBdev2", 00:13:33.119 "uuid": "4cf3b9e2-78a7-45d8-98b7-475f523ad4ab", 00:13:33.119 "is_configured": true, 00:13:33.119 "data_offset": 2048, 00:13:33.119 "data_size": 63488 00:13:33.119 }, 00:13:33.119 { 00:13:33.119 "name": "BaseBdev3", 00:13:33.119 "uuid": "891ce864-2826-4c60-aa22-fddf83d8ea88", 00:13:33.119 "is_configured": true, 00:13:33.119 "data_offset": 2048, 00:13:33.119 "data_size": 63488 00:13:33.119 } 00:13:33.119 ] 00:13:33.119 }' 00:13:33.119 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.119 12:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.379 12:56:50 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:33.379 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:33.379 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:33.379 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:33.379 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:33.379 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:33.379 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:33.379 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:33.379 12:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.379 12:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.379 [2024-11-26 12:56:50.970693] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:33.380 12:56:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.380 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:33.380 "name": "Existed_Raid", 00:13:33.380 "aliases": [ 00:13:33.380 "3ae406d2-5390-443b-abf9-74bd61286ec9" 00:13:33.380 ], 00:13:33.380 "product_name": "Raid Volume", 00:13:33.380 "block_size": 512, 00:13:33.380 "num_blocks": 126976, 00:13:33.380 "uuid": "3ae406d2-5390-443b-abf9-74bd61286ec9", 00:13:33.380 "assigned_rate_limits": { 00:13:33.380 "rw_ios_per_sec": 0, 00:13:33.380 "rw_mbytes_per_sec": 0, 00:13:33.380 "r_mbytes_per_sec": 0, 00:13:33.380 "w_mbytes_per_sec": 0 00:13:33.380 }, 00:13:33.380 "claimed": false, 00:13:33.380 
"zoned": false, 00:13:33.380 "supported_io_types": { 00:13:33.380 "read": true, 00:13:33.380 "write": true, 00:13:33.380 "unmap": false, 00:13:33.380 "flush": false, 00:13:33.380 "reset": true, 00:13:33.380 "nvme_admin": false, 00:13:33.380 "nvme_io": false, 00:13:33.380 "nvme_io_md": false, 00:13:33.380 "write_zeroes": true, 00:13:33.380 "zcopy": false, 00:13:33.380 "get_zone_info": false, 00:13:33.380 "zone_management": false, 00:13:33.380 "zone_append": false, 00:13:33.380 "compare": false, 00:13:33.380 "compare_and_write": false, 00:13:33.380 "abort": false, 00:13:33.380 "seek_hole": false, 00:13:33.380 "seek_data": false, 00:13:33.380 "copy": false, 00:13:33.380 "nvme_iov_md": false 00:13:33.380 }, 00:13:33.380 "driver_specific": { 00:13:33.380 "raid": { 00:13:33.380 "uuid": "3ae406d2-5390-443b-abf9-74bd61286ec9", 00:13:33.380 "strip_size_kb": 64, 00:13:33.380 "state": "online", 00:13:33.380 "raid_level": "raid5f", 00:13:33.380 "superblock": true, 00:13:33.380 "num_base_bdevs": 3, 00:13:33.380 "num_base_bdevs_discovered": 3, 00:13:33.380 "num_base_bdevs_operational": 3, 00:13:33.380 "base_bdevs_list": [ 00:13:33.380 { 00:13:33.380 "name": "NewBaseBdev", 00:13:33.380 "uuid": "7e9309f0-bc24-4720-89f4-f839c79085c3", 00:13:33.380 "is_configured": true, 00:13:33.380 "data_offset": 2048, 00:13:33.380 "data_size": 63488 00:13:33.380 }, 00:13:33.380 { 00:13:33.380 "name": "BaseBdev2", 00:13:33.380 "uuid": "4cf3b9e2-78a7-45d8-98b7-475f523ad4ab", 00:13:33.380 "is_configured": true, 00:13:33.380 "data_offset": 2048, 00:13:33.380 "data_size": 63488 00:13:33.380 }, 00:13:33.380 { 00:13:33.380 "name": "BaseBdev3", 00:13:33.380 "uuid": "891ce864-2826-4c60-aa22-fddf83d8ea88", 00:13:33.380 "is_configured": true, 00:13:33.380 "data_offset": 2048, 00:13:33.380 "data_size": 63488 00:13:33.380 } 00:13:33.380 ] 00:13:33.380 } 00:13:33.380 } 00:13:33.380 }' 00:13:33.380 12:56:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:33.380 12:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:33.380 BaseBdev2 00:13:33.380 BaseBdev3' 00:13:33.380 12:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:33.640 12:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:33.640 12:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:33.640 12:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:33.640 12:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:33.640 12:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.640 12:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.640 12:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.640 12:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:33.640 12:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:33.640 12:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:33.640 12:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:33.640 12:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.641 12:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.641 12:56:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:33.641 12:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.641 12:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:33.641 12:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:33.641 12:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:33.641 12:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:33.641 12:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.641 12:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.641 12:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:33.641 12:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.641 12:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:33.641 12:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:33.641 12:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:33.641 12:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.641 12:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.641 [2024-11-26 12:56:51.246045] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:33.641 [2024-11-26 12:56:51.246079] 
bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:33.641 [2024-11-26 12:56:51.246202] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:33.641 [2024-11-26 12:56:51.246437] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:33.641 [2024-11-26 12:56:51.246462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:13:33.641 12:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.641 12:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 91250 00:13:33.641 12:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 91250 ']' 00:13:33.641 12:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 91250 00:13:33.641 12:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:33.641 12:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:33.641 12:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91250 00:13:33.641 12:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:33.641 12:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:33.641 killing process with pid 91250 00:13:33.641 12:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91250' 00:13:33.641 12:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 91250 00:13:33.641 [2024-11-26 12:56:51.298096] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:33.641 
12:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 91250 00:13:33.900 [2024-11-26 12:56:51.329179] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:34.161 12:56:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:34.161 00:13:34.161 real 0m9.105s 00:13:34.161 user 0m15.387s 00:13:34.161 sys 0m2.032s 00:13:34.161 12:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:34.161 12:56:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.161 ************************************ 00:13:34.161 END TEST raid5f_state_function_test_sb 00:13:34.161 ************************************ 00:13:34.161 12:56:51 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:13:34.161 12:56:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:34.161 12:56:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:34.161 12:56:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:34.161 ************************************ 00:13:34.161 START TEST raid5f_superblock_test 00:13:34.161 ************************************ 00:13:34.161 12:56:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:13:34.161 12:56:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:13:34.161 12:56:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:34.161 12:56:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:34.161 12:56:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:34.161 12:56:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:34.161 12:56:51 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:34.161 12:56:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:34.161 12:56:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:34.161 12:56:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:34.161 12:56:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:34.161 12:56:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:34.161 12:56:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:34.161 12:56:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:34.161 12:56:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:13:34.161 12:56:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:34.161 12:56:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:34.161 12:56:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=91863 00:13:34.161 12:56:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:34.161 12:56:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 91863 00:13:34.161 12:56:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 91863 ']' 00:13:34.161 12:56:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.161 12:56:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:34.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:34.161 12:56:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.161 12:56:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:34.161 12:56:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.161 [2024-11-26 12:56:51.745705] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:34.161 [2024-11-26 12:56:51.745842] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91863 ] 00:13:34.421 [2024-11-26 12:56:51.903726] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.421 [2024-11-26 12:56:51.949407] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.421 [2024-11-26 12:56:51.992901] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:34.421 [2024-11-26 12:56:51.992947] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:34.990 12:56:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:34.990 12:56:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:13:34.990 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:34.990 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:34.990 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:34.990 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:34.990 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:34.990 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:34.990 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:34.990 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:34.990 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:34.990 12:56:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.990 12:56:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.990 malloc1 00:13:34.990 12:56:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.990 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:34.990 12:56:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.990 12:56:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.990 [2024-11-26 12:56:52.630467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:34.990 [2024-11-26 12:56:52.630590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.990 [2024-11-26 12:56:52.630619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:34.990 [2024-11-26 12:56:52.630645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.990 [2024-11-26 12:56:52.633143] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.990 [2024-11-26 12:56:52.633207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:34.990 pt1 00:13:34.990 
12:56:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.990 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:34.990 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:34.990 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:34.990 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:34.990 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:34.990 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:34.990 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:34.990 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:34.990 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:34.990 12:56:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.990 12:56:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.250 malloc2 00:13:35.250 12:56:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.250 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:35.250 12:56:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.250 12:56:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.250 [2024-11-26 12:56:52.681898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:35.250 [2024-11-26 
12:56:52.682017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.250 [2024-11-26 12:56:52.682061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:35.250 [2024-11-26 12:56:52.682092] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.250 [2024-11-26 12:56:52.686935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.250 [2024-11-26 12:56:52.687019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:35.250 pt2 00:13:35.250 12:56:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.250 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:35.250 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:35.250 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:35.250 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:35.250 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:35.250 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:35.250 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:35.250 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:35.250 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:35.250 12:56:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.250 12:56:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.250 malloc3 00:13:35.250 12:56:52 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.250 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:35.251 12:56:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.251 12:56:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.251 [2024-11-26 12:56:52.718611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:35.251 [2024-11-26 12:56:52.718670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.251 [2024-11-26 12:56:52.718707] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:35.251 [2024-11-26 12:56:52.718721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.251 [2024-11-26 12:56:52.721004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.251 [2024-11-26 12:56:52.721047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:35.251 pt3 00:13:35.251 12:56:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.251 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:35.251 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:35.251 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:35.251 12:56:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.251 12:56:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.251 [2024-11-26 12:56:52.730658] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 
is claimed 00:13:35.251 [2024-11-26 12:56:52.732643] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:35.251 [2024-11-26 12:56:52.732719] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:35.251 [2024-11-26 12:56:52.732881] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:35.251 [2024-11-26 12:56:52.732901] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:35.251 [2024-11-26 12:56:52.733152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:35.251 [2024-11-26 12:56:52.733607] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:35.251 [2024-11-26 12:56:52.733633] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:35.251 [2024-11-26 12:56:52.733759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.251 12:56:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.251 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:35.251 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.251 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.251 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:35.251 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.251 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:35.251 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.251 12:56:52 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.251 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.251 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.251 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.251 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.251 12:56:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.251 12:56:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.251 12:56:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.251 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.251 "name": "raid_bdev1", 00:13:35.251 "uuid": "37e26b10-b92f-48cf-921d-64308dadec8c", 00:13:35.251 "strip_size_kb": 64, 00:13:35.251 "state": "online", 00:13:35.251 "raid_level": "raid5f", 00:13:35.251 "superblock": true, 00:13:35.251 "num_base_bdevs": 3, 00:13:35.251 "num_base_bdevs_discovered": 3, 00:13:35.251 "num_base_bdevs_operational": 3, 00:13:35.251 "base_bdevs_list": [ 00:13:35.251 { 00:13:35.251 "name": "pt1", 00:13:35.251 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:35.251 "is_configured": true, 00:13:35.251 "data_offset": 2048, 00:13:35.251 "data_size": 63488 00:13:35.251 }, 00:13:35.251 { 00:13:35.251 "name": "pt2", 00:13:35.251 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:35.251 "is_configured": true, 00:13:35.251 "data_offset": 2048, 00:13:35.251 "data_size": 63488 00:13:35.251 }, 00:13:35.251 { 00:13:35.251 "name": "pt3", 00:13:35.251 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:35.251 "is_configured": true, 00:13:35.251 "data_offset": 2048, 00:13:35.251 "data_size": 63488 00:13:35.251 } 00:13:35.251 ] 
00:13:35.251 }' 00:13:35.251 12:56:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.251 12:56:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.511 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:35.511 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:35.511 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:35.511 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:35.511 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:35.511 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:35.511 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:35.511 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:35.511 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.511 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.511 [2024-11-26 12:56:53.175372] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:35.771 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:35.772 "name": "raid_bdev1", 00:13:35.772 "aliases": [ 00:13:35.772 "37e26b10-b92f-48cf-921d-64308dadec8c" 00:13:35.772 ], 00:13:35.772 "product_name": "Raid Volume", 00:13:35.772 "block_size": 512, 00:13:35.772 "num_blocks": 126976, 00:13:35.772 "uuid": "37e26b10-b92f-48cf-921d-64308dadec8c", 00:13:35.772 "assigned_rate_limits": { 00:13:35.772 
"rw_ios_per_sec": 0, 00:13:35.772 "rw_mbytes_per_sec": 0, 00:13:35.772 "r_mbytes_per_sec": 0, 00:13:35.772 "w_mbytes_per_sec": 0 00:13:35.772 }, 00:13:35.772 "claimed": false, 00:13:35.772 "zoned": false, 00:13:35.772 "supported_io_types": { 00:13:35.772 "read": true, 00:13:35.772 "write": true, 00:13:35.772 "unmap": false, 00:13:35.772 "flush": false, 00:13:35.772 "reset": true, 00:13:35.772 "nvme_admin": false, 00:13:35.772 "nvme_io": false, 00:13:35.772 "nvme_io_md": false, 00:13:35.772 "write_zeroes": true, 00:13:35.772 "zcopy": false, 00:13:35.772 "get_zone_info": false, 00:13:35.772 "zone_management": false, 00:13:35.772 "zone_append": false, 00:13:35.772 "compare": false, 00:13:35.772 "compare_and_write": false, 00:13:35.772 "abort": false, 00:13:35.772 "seek_hole": false, 00:13:35.772 "seek_data": false, 00:13:35.772 "copy": false, 00:13:35.772 "nvme_iov_md": false 00:13:35.772 }, 00:13:35.772 "driver_specific": { 00:13:35.772 "raid": { 00:13:35.772 "uuid": "37e26b10-b92f-48cf-921d-64308dadec8c", 00:13:35.772 "strip_size_kb": 64, 00:13:35.772 "state": "online", 00:13:35.772 "raid_level": "raid5f", 00:13:35.772 "superblock": true, 00:13:35.772 "num_base_bdevs": 3, 00:13:35.772 "num_base_bdevs_discovered": 3, 00:13:35.772 "num_base_bdevs_operational": 3, 00:13:35.772 "base_bdevs_list": [ 00:13:35.772 { 00:13:35.772 "name": "pt1", 00:13:35.772 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:35.772 "is_configured": true, 00:13:35.772 "data_offset": 2048, 00:13:35.772 "data_size": 63488 00:13:35.772 }, 00:13:35.772 { 00:13:35.772 "name": "pt2", 00:13:35.772 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:35.772 "is_configured": true, 00:13:35.772 "data_offset": 2048, 00:13:35.772 "data_size": 63488 00:13:35.772 }, 00:13:35.772 { 00:13:35.772 "name": "pt3", 00:13:35.772 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:35.772 "is_configured": true, 00:13:35.772 "data_offset": 2048, 00:13:35.772 "data_size": 63488 00:13:35.772 } 00:13:35.772 ] 
00:13:35.772 } 00:13:35.772 } 00:13:35.772 }' 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:35.772 pt2 00:13:35.772 pt3' 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.772 12:56:53 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.772 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:35.772 [2024-11-26 12:56:53.438851] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:36.032 12:56:53 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.032 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=37e26b10-b92f-48cf-921d-64308dadec8c 00:13:36.032 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 37e26b10-b92f-48cf-921d-64308dadec8c ']' 00:13:36.032 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:36.032 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.032 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.032 [2024-11-26 12:56:53.482597] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:36.032 [2024-11-26 12:56:53.482625] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:36.032 [2024-11-26 12:56:53.482728] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:36.032 [2024-11-26 12:56:53.482813] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:36.032 [2024-11-26 12:56:53.482830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:36.032 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.032 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:36.032 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.032 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.032 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.032 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.032 12:56:53 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:36.032 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:36.032 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:36.032 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:36.032 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.032 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.032 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.033 [2024-11-26 12:56:53.618392] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:36.033 [2024-11-26 
12:56:53.620690] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:36.033 [2024-11-26 12:56:53.620748] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:36.033 [2024-11-26 12:56:53.620807] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:36.033 [2024-11-26 12:56:53.620855] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:36.033 [2024-11-26 12:56:53.620884] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:36.033 [2024-11-26 12:56:53.620900] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:36.033 [2024-11-26 12:56:53.620916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:13:36.033 request: 00:13:36.033 { 00:13:36.033 "name": "raid_bdev1", 00:13:36.033 "raid_level": "raid5f", 00:13:36.033 "base_bdevs": [ 00:13:36.033 "malloc1", 00:13:36.033 "malloc2", 00:13:36.033 "malloc3" 00:13:36.033 ], 00:13:36.033 "strip_size_kb": 64, 00:13:36.033 "superblock": false, 00:13:36.033 "method": "bdev_raid_create", 00:13:36.033 "req_id": 1 00:13:36.033 } 00:13:36.033 Got JSON-RPC error response 00:13:36.033 response: 00:13:36.033 { 00:13:36.033 "code": -17, 00:13:36.033 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:36.033 } 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 
00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.033 [2024-11-26 12:56:53.666296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:36.033 [2024-11-26 12:56:53.666352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.033 [2024-11-26 12:56:53.666371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:36.033 [2024-11-26 12:56:53.666385] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.033 [2024-11-26 12:56:53.668776] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.033 [2024-11-26 12:56:53.668833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:36.033 [2024-11-26 12:56:53.668907] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:36.033 [2024-11-26 12:56:53.668958] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:36.033 pt1 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.033 12:56:53 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.292 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.292 "name": "raid_bdev1", 00:13:36.292 "uuid": "37e26b10-b92f-48cf-921d-64308dadec8c", 00:13:36.292 "strip_size_kb": 64, 00:13:36.292 "state": "configuring", 00:13:36.292 "raid_level": "raid5f", 00:13:36.292 "superblock": true, 00:13:36.292 "num_base_bdevs": 3, 00:13:36.292 "num_base_bdevs_discovered": 1, 00:13:36.292 "num_base_bdevs_operational": 3, 00:13:36.292 "base_bdevs_list": [ 00:13:36.292 { 00:13:36.292 "name": "pt1", 00:13:36.292 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:36.292 "is_configured": true, 00:13:36.292 "data_offset": 2048, 00:13:36.292 "data_size": 63488 00:13:36.292 }, 00:13:36.292 { 00:13:36.292 "name": null, 00:13:36.293 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:36.293 "is_configured": false, 00:13:36.293 "data_offset": 2048, 00:13:36.293 "data_size": 63488 00:13:36.293 }, 00:13:36.293 { 00:13:36.293 "name": null, 00:13:36.293 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:36.293 "is_configured": false, 00:13:36.293 "data_offset": 2048, 00:13:36.293 "data_size": 63488 00:13:36.293 } 00:13:36.293 ] 00:13:36.293 }' 00:13:36.293 12:56:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.293 12:56:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.552 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:36.552 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:36.552 12:56:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.552 12:56:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.552 [2024-11-26 12:56:54.069647] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:36.552 [2024-11-26 12:56:54.069710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.552 [2024-11-26 12:56:54.069734] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:36.552 [2024-11-26 12:56:54.069754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.552 [2024-11-26 12:56:54.070229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.552 [2024-11-26 12:56:54.070265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:36.552 [2024-11-26 12:56:54.070342] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:36.552 [2024-11-26 12:56:54.070372] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:36.552 pt2 00:13:36.552 12:56:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.552 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:36.552 12:56:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.552 12:56:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.552 [2024-11-26 12:56:54.077665] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:36.552 12:56:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.552 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:13:36.552 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.552 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.552 12:56:54 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:36.552 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.552 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:36.552 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.552 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.552 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.552 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.552 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.552 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.552 12:56:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.552 12:56:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.552 12:56:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.552 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.552 "name": "raid_bdev1", 00:13:36.552 "uuid": "37e26b10-b92f-48cf-921d-64308dadec8c", 00:13:36.552 "strip_size_kb": 64, 00:13:36.552 "state": "configuring", 00:13:36.552 "raid_level": "raid5f", 00:13:36.552 "superblock": true, 00:13:36.552 "num_base_bdevs": 3, 00:13:36.552 "num_base_bdevs_discovered": 1, 00:13:36.552 "num_base_bdevs_operational": 3, 00:13:36.552 "base_bdevs_list": [ 00:13:36.552 { 00:13:36.552 "name": "pt1", 00:13:36.552 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:36.552 "is_configured": true, 00:13:36.552 "data_offset": 2048, 00:13:36.552 "data_size": 63488 00:13:36.552 }, 00:13:36.552 { 
00:13:36.552 "name": null, 00:13:36.552 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:36.552 "is_configured": false, 00:13:36.552 "data_offset": 0, 00:13:36.552 "data_size": 63488 00:13:36.552 }, 00:13:36.552 { 00:13:36.552 "name": null, 00:13:36.552 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:36.552 "is_configured": false, 00:13:36.552 "data_offset": 2048, 00:13:36.552 "data_size": 63488 00:13:36.552 } 00:13:36.552 ] 00:13:36.552 }' 00:13:36.552 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.552 12:56:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.120 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:37.120 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:37.120 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:37.120 12:56:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.120 12:56:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.120 [2024-11-26 12:56:54.520850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:37.120 [2024-11-26 12:56:54.520968] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.120 [2024-11-26 12:56:54.521009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:37.120 [2024-11-26 12:56:54.521042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.120 [2024-11-26 12:56:54.521469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.120 [2024-11-26 12:56:54.521535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:37.120 [2024-11-26 
12:56:54.521636] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:37.120 [2024-11-26 12:56:54.521690] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:37.120 pt2 00:13:37.120 12:56:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.120 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:37.120 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:37.120 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:37.120 12:56:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.120 12:56:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.120 [2024-11-26 12:56:54.532817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:37.120 [2024-11-26 12:56:54.532865] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.120 [2024-11-26 12:56:54.532901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:37.120 [2024-11-26 12:56:54.532911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.120 [2024-11-26 12:56:54.533288] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.120 [2024-11-26 12:56:54.533306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:37.120 [2024-11-26 12:56:54.533367] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:37.120 [2024-11-26 12:56:54.533385] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:37.120 [2024-11-26 12:56:54.533490] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000006980 00:13:37.120 [2024-11-26 12:56:54.533500] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:37.120 [2024-11-26 12:56:54.533737] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:37.121 [2024-11-26 12:56:54.534193] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:13:37.121 [2024-11-26 12:56:54.534211] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:13:37.121 [2024-11-26 12:56:54.534320] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.121 pt3 00:13:37.121 12:56:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.121 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:37.121 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:37.121 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:37.121 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.121 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.121 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:37.121 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:37.121 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.121 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.121 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.121 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:13:37.121 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.121 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.121 12:56:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.121 12:56:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.121 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.121 12:56:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.121 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.121 "name": "raid_bdev1", 00:13:37.121 "uuid": "37e26b10-b92f-48cf-921d-64308dadec8c", 00:13:37.121 "strip_size_kb": 64, 00:13:37.121 "state": "online", 00:13:37.121 "raid_level": "raid5f", 00:13:37.121 "superblock": true, 00:13:37.121 "num_base_bdevs": 3, 00:13:37.121 "num_base_bdevs_discovered": 3, 00:13:37.121 "num_base_bdevs_operational": 3, 00:13:37.121 "base_bdevs_list": [ 00:13:37.121 { 00:13:37.121 "name": "pt1", 00:13:37.121 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:37.121 "is_configured": true, 00:13:37.121 "data_offset": 2048, 00:13:37.121 "data_size": 63488 00:13:37.121 }, 00:13:37.121 { 00:13:37.121 "name": "pt2", 00:13:37.121 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:37.121 "is_configured": true, 00:13:37.121 "data_offset": 2048, 00:13:37.121 "data_size": 63488 00:13:37.121 }, 00:13:37.121 { 00:13:37.121 "name": "pt3", 00:13:37.121 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:37.121 "is_configured": true, 00:13:37.121 "data_offset": 2048, 00:13:37.121 "data_size": 63488 00:13:37.121 } 00:13:37.121 ] 00:13:37.121 }' 00:13:37.121 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.121 12:56:54 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.379 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:37.379 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:37.379 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:37.379 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:37.379 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:37.379 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:37.379 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:37.379 12:56:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.379 12:56:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.379 12:56:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:37.379 [2024-11-26 12:56:54.988326] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:37.379 12:56:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.379 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:37.379 "name": "raid_bdev1", 00:13:37.379 "aliases": [ 00:13:37.379 "37e26b10-b92f-48cf-921d-64308dadec8c" 00:13:37.379 ], 00:13:37.379 "product_name": "Raid Volume", 00:13:37.379 "block_size": 512, 00:13:37.379 "num_blocks": 126976, 00:13:37.379 "uuid": "37e26b10-b92f-48cf-921d-64308dadec8c", 00:13:37.379 "assigned_rate_limits": { 00:13:37.379 "rw_ios_per_sec": 0, 00:13:37.379 "rw_mbytes_per_sec": 0, 00:13:37.379 "r_mbytes_per_sec": 0, 00:13:37.379 "w_mbytes_per_sec": 0 00:13:37.379 }, 
00:13:37.379 "claimed": false, 00:13:37.379 "zoned": false, 00:13:37.379 "supported_io_types": { 00:13:37.379 "read": true, 00:13:37.379 "write": true, 00:13:37.379 "unmap": false, 00:13:37.379 "flush": false, 00:13:37.379 "reset": true, 00:13:37.379 "nvme_admin": false, 00:13:37.379 "nvme_io": false, 00:13:37.379 "nvme_io_md": false, 00:13:37.379 "write_zeroes": true, 00:13:37.379 "zcopy": false, 00:13:37.379 "get_zone_info": false, 00:13:37.379 "zone_management": false, 00:13:37.379 "zone_append": false, 00:13:37.379 "compare": false, 00:13:37.379 "compare_and_write": false, 00:13:37.379 "abort": false, 00:13:37.379 "seek_hole": false, 00:13:37.379 "seek_data": false, 00:13:37.379 "copy": false, 00:13:37.379 "nvme_iov_md": false 00:13:37.379 }, 00:13:37.379 "driver_specific": { 00:13:37.379 "raid": { 00:13:37.379 "uuid": "37e26b10-b92f-48cf-921d-64308dadec8c", 00:13:37.379 "strip_size_kb": 64, 00:13:37.379 "state": "online", 00:13:37.379 "raid_level": "raid5f", 00:13:37.379 "superblock": true, 00:13:37.379 "num_base_bdevs": 3, 00:13:37.379 "num_base_bdevs_discovered": 3, 00:13:37.379 "num_base_bdevs_operational": 3, 00:13:37.379 "base_bdevs_list": [ 00:13:37.379 { 00:13:37.379 "name": "pt1", 00:13:37.379 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:37.379 "is_configured": true, 00:13:37.379 "data_offset": 2048, 00:13:37.379 "data_size": 63488 00:13:37.379 }, 00:13:37.379 { 00:13:37.379 "name": "pt2", 00:13:37.379 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:37.379 "is_configured": true, 00:13:37.379 "data_offset": 2048, 00:13:37.379 "data_size": 63488 00:13:37.379 }, 00:13:37.379 { 00:13:37.379 "name": "pt3", 00:13:37.379 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:37.379 "is_configured": true, 00:13:37.379 "data_offset": 2048, 00:13:37.379 "data_size": 63488 00:13:37.379 } 00:13:37.379 ] 00:13:37.379 } 00:13:37.379 } 00:13:37.379 }' 00:13:37.379 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:37.639 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:37.639 pt2 00:13:37.639 pt3' 00:13:37.639 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.639 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:37.639 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.639 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.639 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:37.639 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.639 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.639 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.639 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.639 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.639 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.639 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:37.639 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.639 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.639 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:13:37.639 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.639 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.639 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.639 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.640 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:37.640 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.640 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.640 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.640 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.640 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.640 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.640 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:37.640 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:37.640 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.640 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.640 [2024-11-26 12:56:55.255813] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:37.640 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.640 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
37e26b10-b92f-48cf-921d-64308dadec8c '!=' 37e26b10-b92f-48cf-921d-64308dadec8c ']' 00:13:37.640 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:13:37.640 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:37.640 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:37.640 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:37.640 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.640 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.640 [2024-11-26 12:56:55.303613] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:37.640 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.640 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:37.640 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.640 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.640 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:37.640 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:37.640 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:37.640 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.640 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.640 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.640 12:56:55 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.899 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.899 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.899 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.899 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.899 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.899 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.899 "name": "raid_bdev1", 00:13:37.899 "uuid": "37e26b10-b92f-48cf-921d-64308dadec8c", 00:13:37.899 "strip_size_kb": 64, 00:13:37.899 "state": "online", 00:13:37.899 "raid_level": "raid5f", 00:13:37.899 "superblock": true, 00:13:37.899 "num_base_bdevs": 3, 00:13:37.899 "num_base_bdevs_discovered": 2, 00:13:37.899 "num_base_bdevs_operational": 2, 00:13:37.899 "base_bdevs_list": [ 00:13:37.899 { 00:13:37.899 "name": null, 00:13:37.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.899 "is_configured": false, 00:13:37.899 "data_offset": 0, 00:13:37.899 "data_size": 63488 00:13:37.899 }, 00:13:37.899 { 00:13:37.899 "name": "pt2", 00:13:37.899 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:37.899 "is_configured": true, 00:13:37.899 "data_offset": 2048, 00:13:37.899 "data_size": 63488 00:13:37.899 }, 00:13:37.899 { 00:13:37.899 "name": "pt3", 00:13:37.899 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:37.899 "is_configured": true, 00:13:37.899 "data_offset": 2048, 00:13:37.899 "data_size": 63488 00:13:37.899 } 00:13:37.899 ] 00:13:37.899 }' 00:13:37.899 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.899 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.159 
12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.159 [2024-11-26 12:56:55.698852] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:38.159 [2024-11-26 12:56:55.698933] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:38.159 [2024-11-26 12:56:55.699017] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:38.159 [2024-11-26 12:56:55.699097] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:38.159 [2024-11-26 12:56:55.699136] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.159 [2024-11-26 12:56:55.782718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:13:38.159 [2024-11-26 12:56:55.782771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.159 [2024-11-26 12:56:55.782794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:38.159 [2024-11-26 12:56:55.782806] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.159 [2024-11-26 12:56:55.785202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.159 [2024-11-26 12:56:55.785284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:38.159 [2024-11-26 12:56:55.785372] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:38.159 [2024-11-26 12:56:55.785417] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:38.159 pt2 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:38.159 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.160 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:38.160 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:38.160 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:38.160 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:38.160 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.160 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.160 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:38.160 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.160 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.160 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.160 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.160 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.160 12:56:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.420 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.420 "name": "raid_bdev1", 00:13:38.420 "uuid": "37e26b10-b92f-48cf-921d-64308dadec8c", 00:13:38.420 "strip_size_kb": 64, 00:13:38.420 "state": "configuring", 00:13:38.420 "raid_level": "raid5f", 00:13:38.420 "superblock": true, 00:13:38.420 "num_base_bdevs": 3, 00:13:38.420 "num_base_bdevs_discovered": 1, 00:13:38.420 "num_base_bdevs_operational": 2, 00:13:38.420 "base_bdevs_list": [ 00:13:38.420 { 00:13:38.420 "name": null, 00:13:38.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.420 "is_configured": false, 00:13:38.420 "data_offset": 2048, 00:13:38.420 "data_size": 63488 00:13:38.420 }, 00:13:38.420 { 00:13:38.420 "name": "pt2", 00:13:38.420 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:38.420 "is_configured": true, 00:13:38.420 "data_offset": 2048, 00:13:38.420 "data_size": 63488 00:13:38.420 }, 00:13:38.420 { 00:13:38.420 "name": null, 00:13:38.420 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:38.420 "is_configured": false, 00:13:38.420 "data_offset": 2048, 00:13:38.420 "data_size": 63488 00:13:38.420 } 00:13:38.420 ] 00:13:38.420 }' 00:13:38.420 12:56:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.420 12:56:55 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.679 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:38.679 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:38.679 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:13:38.679 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:38.679 12:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.679 12:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.679 [2024-11-26 12:56:56.209990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:38.679 [2024-11-26 12:56:56.210099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.679 [2024-11-26 12:56:56.210145] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:38.679 [2024-11-26 12:56:56.210190] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.679 [2024-11-26 12:56:56.210654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.679 [2024-11-26 12:56:56.210719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:38.679 [2024-11-26 12:56:56.210828] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:38.679 [2024-11-26 12:56:56.210896] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:38.679 [2024-11-26 12:56:56.211044] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:13:38.679 [2024-11-26 12:56:56.211087] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:38.679 [2024-11-26 
12:56:56.211372] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:38.679 [2024-11-26 12:56:56.211875] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:13:38.679 [2024-11-26 12:56:56.211892] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:13:38.679 [2024-11-26 12:56:56.212145] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.679 pt3 00:13:38.679 12:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.679 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:38.680 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.680 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.680 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:38.680 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:38.680 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:38.680 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.680 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.680 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.680 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.680 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.680 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:13:38.680 12:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.680 12:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.680 12:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.680 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.680 "name": "raid_bdev1", 00:13:38.680 "uuid": "37e26b10-b92f-48cf-921d-64308dadec8c", 00:13:38.680 "strip_size_kb": 64, 00:13:38.680 "state": "online", 00:13:38.680 "raid_level": "raid5f", 00:13:38.680 "superblock": true, 00:13:38.680 "num_base_bdevs": 3, 00:13:38.680 "num_base_bdevs_discovered": 2, 00:13:38.680 "num_base_bdevs_operational": 2, 00:13:38.680 "base_bdevs_list": [ 00:13:38.680 { 00:13:38.680 "name": null, 00:13:38.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.680 "is_configured": false, 00:13:38.680 "data_offset": 2048, 00:13:38.680 "data_size": 63488 00:13:38.680 }, 00:13:38.680 { 00:13:38.680 "name": "pt2", 00:13:38.680 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:38.680 "is_configured": true, 00:13:38.680 "data_offset": 2048, 00:13:38.680 "data_size": 63488 00:13:38.680 }, 00:13:38.680 { 00:13:38.680 "name": "pt3", 00:13:38.680 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:38.680 "is_configured": true, 00:13:38.680 "data_offset": 2048, 00:13:38.680 "data_size": 63488 00:13:38.680 } 00:13:38.680 ] 00:13:38.680 }' 00:13:38.680 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.680 12:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.248 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:39.248 12:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.248 12:56:56 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:39.248 [2024-11-26 12:56:56.673369] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:39.248 [2024-11-26 12:56:56.673451] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:39.248 [2024-11-26 12:56:56.673540] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:39.248 [2024-11-26 12:56:56.673613] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:39.248 [2024-11-26 12:56:56.673698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:13:39.248 12:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.248 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:39.248 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.248 12:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.248 12:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.249 12:56:56 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.249 [2024-11-26 12:56:56.725304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:39.249 [2024-11-26 12:56:56.725421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.249 [2024-11-26 12:56:56.725458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:39.249 [2024-11-26 12:56:56.725495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.249 [2024-11-26 12:56:56.727839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.249 [2024-11-26 12:56:56.727923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:39.249 [2024-11-26 12:56:56.728032] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:39.249 [2024-11-26 12:56:56.728102] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:39.249 [2024-11-26 12:56:56.728254] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:39.249 [2024-11-26 12:56:56.728329] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:39.249 [2024-11-26 12:56:56.728384] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:13:39.249 
[2024-11-26 12:56:56.728482] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:39.249 pt1 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.249 "name": "raid_bdev1", 00:13:39.249 "uuid": "37e26b10-b92f-48cf-921d-64308dadec8c", 00:13:39.249 "strip_size_kb": 64, 00:13:39.249 "state": "configuring", 00:13:39.249 "raid_level": "raid5f", 00:13:39.249 "superblock": true, 00:13:39.249 "num_base_bdevs": 3, 00:13:39.249 "num_base_bdevs_discovered": 1, 00:13:39.249 "num_base_bdevs_operational": 2, 00:13:39.249 "base_bdevs_list": [ 00:13:39.249 { 00:13:39.249 "name": null, 00:13:39.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.249 "is_configured": false, 00:13:39.249 "data_offset": 2048, 00:13:39.249 "data_size": 63488 00:13:39.249 }, 00:13:39.249 { 00:13:39.249 "name": "pt2", 00:13:39.249 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:39.249 "is_configured": true, 00:13:39.249 "data_offset": 2048, 00:13:39.249 "data_size": 63488 00:13:39.249 }, 00:13:39.249 { 00:13:39.249 "name": null, 00:13:39.249 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:39.249 "is_configured": false, 00:13:39.249 "data_offset": 2048, 00:13:39.249 "data_size": 63488 00:13:39.249 } 00:13:39.249 ] 00:13:39.249 }' 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.249 12:56:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.820 12:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:39.820 12:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:39.820 12:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.820 12:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.820 12:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:13:39.820 12:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:39.820 12:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:39.820 12:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.820 12:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.820 [2024-11-26 12:56:57.252395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:39.820 [2024-11-26 12:56:57.252466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.820 [2024-11-26 12:56:57.252487] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:39.820 [2024-11-26 12:56:57.252501] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.820 [2024-11-26 12:56:57.252959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.820 [2024-11-26 12:56:57.252984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:39.820 [2024-11-26 12:56:57.253066] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:39.820 [2024-11-26 12:56:57.253096] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:39.820 [2024-11-26 12:56:57.253241] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:13:39.820 [2024-11-26 12:56:57.253257] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:39.820 [2024-11-26 12:56:57.253520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:39.820 [2024-11-26 12:56:57.254059] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:13:39.820 [2024-11-26 
12:56:57.254080] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:13:39.820 [2024-11-26 12:56:57.254289] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.820 pt3 00:13:39.820 12:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.820 12:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:39.820 12:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.820 12:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.820 12:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:39.820 12:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.820 12:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:39.820 12:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.820 12:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.820 12:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.820 12:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.820 12:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.820 12:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.820 12:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.820 12:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.820 12:56:57 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.820 12:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.820 "name": "raid_bdev1", 00:13:39.820 "uuid": "37e26b10-b92f-48cf-921d-64308dadec8c", 00:13:39.820 "strip_size_kb": 64, 00:13:39.820 "state": "online", 00:13:39.820 "raid_level": "raid5f", 00:13:39.820 "superblock": true, 00:13:39.820 "num_base_bdevs": 3, 00:13:39.820 "num_base_bdevs_discovered": 2, 00:13:39.820 "num_base_bdevs_operational": 2, 00:13:39.820 "base_bdevs_list": [ 00:13:39.820 { 00:13:39.820 "name": null, 00:13:39.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.820 "is_configured": false, 00:13:39.820 "data_offset": 2048, 00:13:39.820 "data_size": 63488 00:13:39.820 }, 00:13:39.820 { 00:13:39.820 "name": "pt2", 00:13:39.820 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:39.820 "is_configured": true, 00:13:39.820 "data_offset": 2048, 00:13:39.820 "data_size": 63488 00:13:39.820 }, 00:13:39.820 { 00:13:39.820 "name": "pt3", 00:13:39.820 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:39.820 "is_configured": true, 00:13:39.820 "data_offset": 2048, 00:13:39.820 "data_size": 63488 00:13:39.820 } 00:13:39.820 ] 00:13:39.820 }' 00:13:39.820 12:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.820 12:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.080 12:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:40.080 12:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:40.080 12:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.080 12:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.080 12:56:57 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.340 12:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:40.340 12:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:40.340 12:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.340 12:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.340 12:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:40.340 [2024-11-26 12:56:57.775873] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:40.340 12:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.340 12:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 37e26b10-b92f-48cf-921d-64308dadec8c '!=' 37e26b10-b92f-48cf-921d-64308dadec8c ']' 00:13:40.340 12:56:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 91863 00:13:40.340 12:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 91863 ']' 00:13:40.340 12:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 91863 00:13:40.340 12:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:13:40.340 12:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:40.340 12:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91863 00:13:40.340 12:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:40.340 12:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:40.340 12:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process 
with pid 91863' 00:13:40.340 killing process with pid 91863 00:13:40.340 12:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 91863 00:13:40.340 [2024-11-26 12:56:57.849234] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:40.340 [2024-11-26 12:56:57.849374] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:40.340 12:56:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 91863 00:13:40.340 [2024-11-26 12:56:57.849477] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:40.340 [2024-11-26 12:56:57.849489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:13:40.340 [2024-11-26 12:56:57.909829] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:40.910 12:56:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:40.910 00:13:40.910 real 0m6.627s 00:13:40.910 user 0m10.881s 00:13:40.910 sys 0m1.444s 00:13:40.910 12:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:40.910 ************************************ 00:13:40.910 END TEST raid5f_superblock_test 00:13:40.910 ************************************ 00:13:40.910 12:56:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.910 12:56:58 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:13:40.910 12:56:58 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:13:40.910 12:56:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:40.910 12:56:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:40.911 12:56:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:40.911 ************************************ 00:13:40.911 START TEST raid5f_rebuild_test 
00:13:40.911 ************************************ 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=92296 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 92296 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 92296 ']' 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:40.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:40.911 12:56:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.911 [2024-11-26 12:56:58.462160] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:40.911 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:40.911 Zero copy mechanism will not be used. 00:13:40.911 [2024-11-26 12:56:58.462403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92296 ] 00:13:41.171 [2024-11-26 12:56:58.628416] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.171 [2024-11-26 12:56:58.697595] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.171 [2024-11-26 12:56:58.773553] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:41.171 [2024-11-26 12:56:58.773691] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:41.740 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:41.740 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:13:41.740 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:41.740 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:41.740 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.740 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.740 BaseBdev1_malloc 00:13:41.740 
12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.740 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:41.740 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.740 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.740 [2024-11-26 12:56:59.311908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:41.740 [2024-11-26 12:56:59.312017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.740 [2024-11-26 12:56:59.312059] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:41.741 [2024-11-26 12:56:59.312087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.741 [2024-11-26 12:56:59.314561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.741 [2024-11-26 12:56:59.314601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:41.741 BaseBdev1 00:13:41.741 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.741 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:41.741 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:41.741 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.741 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.741 BaseBdev2_malloc 00:13:41.741 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.741 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:13:41.741 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.741 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.741 [2024-11-26 12:56:59.366102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:41.741 [2024-11-26 12:56:59.366252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.741 [2024-11-26 12:56:59.366310] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:41.741 [2024-11-26 12:56:59.366357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.741 [2024-11-26 12:56:59.371248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.741 [2024-11-26 12:56:59.371305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:41.741 BaseBdev2 00:13:41.741 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.741 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:41.741 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:41.741 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.741 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.741 BaseBdev3_malloc 00:13:41.741 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.741 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:41.741 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.741 12:56:59 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:41.741 [2024-11-26 12:56:59.402731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:41.741 [2024-11-26 12:56:59.402788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.741 [2024-11-26 12:56:59.402837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:41.741 [2024-11-26 12:56:59.402848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.741 [2024-11-26 12:56:59.405223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.741 [2024-11-26 12:56:59.405309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:41.741 BaseBdev3 00:13:41.741 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.741 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:41.741 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.741 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.000 spare_malloc 00:13:42.000 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.000 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:42.000 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.000 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.000 spare_delay 00:13:42.000 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.000 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p 
spare 00:13:42.000 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.000 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.000 [2024-11-26 12:56:59.449133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:42.000 [2024-11-26 12:56:59.449269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.000 [2024-11-26 12:56:59.449305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:42.000 [2024-11-26 12:56:59.449316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.000 [2024-11-26 12:56:59.451634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.000 [2024-11-26 12:56:59.451696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:42.000 spare 00:13:42.000 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.000 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:13:42.000 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.000 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.000 [2024-11-26 12:56:59.461195] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:42.000 [2024-11-26 12:56:59.463162] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:42.000 [2024-11-26 12:56:59.463250] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:42.000 [2024-11-26 12:56:59.463341] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:42.000 [2024-11-26 12:56:59.463353] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:42.000 [2024-11-26 12:56:59.463636] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:42.000 [2024-11-26 12:56:59.464111] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:42.000 [2024-11-26 12:56:59.464123] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:42.000 [2024-11-26 12:56:59.464275] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.000 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.000 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:42.000 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.000 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.000 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:42.000 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.000 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:42.000 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.000 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.000 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.000 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.000 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.000 12:56:59 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.000 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.000 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.000 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.000 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.000 "name": "raid_bdev1", 00:13:42.000 "uuid": "6d8d59be-f2b7-421e-849a-f18c73f1d2b8", 00:13:42.000 "strip_size_kb": 64, 00:13:42.000 "state": "online", 00:13:42.000 "raid_level": "raid5f", 00:13:42.000 "superblock": false, 00:13:42.000 "num_base_bdevs": 3, 00:13:42.000 "num_base_bdevs_discovered": 3, 00:13:42.000 "num_base_bdevs_operational": 3, 00:13:42.000 "base_bdevs_list": [ 00:13:42.000 { 00:13:42.000 "name": "BaseBdev1", 00:13:42.000 "uuid": "ada66cc0-41ee-5768-a9cd-0f0a160cb26a", 00:13:42.000 "is_configured": true, 00:13:42.000 "data_offset": 0, 00:13:42.000 "data_size": 65536 00:13:42.000 }, 00:13:42.000 { 00:13:42.000 "name": "BaseBdev2", 00:13:42.000 "uuid": "c49aee7f-986b-58c6-9cba-bb7f3abae3e2", 00:13:42.000 "is_configured": true, 00:13:42.000 "data_offset": 0, 00:13:42.000 "data_size": 65536 00:13:42.000 }, 00:13:42.000 { 00:13:42.001 "name": "BaseBdev3", 00:13:42.001 "uuid": "79ec4199-522f-554c-8921-f964321ac644", 00:13:42.001 "is_configured": true, 00:13:42.001 "data_offset": 0, 00:13:42.001 "data_size": 65536 00:13:42.001 } 00:13:42.001 ] 00:13:42.001 }' 00:13:42.001 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.001 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.260 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:42.260 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 
00:13:42.260 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.260 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.260 [2024-11-26 12:56:59.869970] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:42.260 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.260 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:13:42.260 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.260 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:42.260 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.260 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.260 12:56:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.519 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:42.519 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:42.519 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:42.519 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:42.519 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:42.519 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:42.519 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:42.519 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:42.519 12:56:59 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:42.519 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:42.519 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:42.519 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:42.519 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:42.519 12:56:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:42.519 [2024-11-26 12:57:00.145459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:42.519 /dev/nbd0 00:13:42.519 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:42.519 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:42.519 12:57:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:42.520 12:57:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:42.520 12:57:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:42.520 12:57:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:42.520 12:57:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:42.520 12:57:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:42.520 12:57:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:42.520 12:57:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:42.520 12:57:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:42.780 1+0 
records in 00:13:42.780 1+0 records out 00:13:42.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369388 s, 11.1 MB/s 00:13:42.780 12:57:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.780 12:57:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:42.780 12:57:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.780 12:57:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:42.780 12:57:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:42.780 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:42.780 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:42.780 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:42.780 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:13:42.780 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:13:42.780 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:13:43.039 512+0 records in 00:13:43.039 512+0 records out 00:13:43.039 67108864 bytes (67 MB, 64 MiB) copied, 0.307993 s, 218 MB/s 00:13:43.039 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:43.039 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.039 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:43.039 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:43.039 12:57:00 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@51 -- # local i 00:13:43.039 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.039 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:43.039 [2024-11-26 12:57:00.708931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.298 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:43.298 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:43.298 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:43.298 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.298 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.298 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:43.298 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:43.298 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:43.298 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:43.298 12:57:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.298 12:57:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.298 [2024-11-26 12:57:00.740965] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:43.298 12:57:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.298 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:43.298 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:13:43.298 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.298 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:43.298 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:43.298 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:43.298 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.298 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.298 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.298 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.298 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.298 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.298 12:57:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.298 12:57:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.298 12:57:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.298 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.298 "name": "raid_bdev1", 00:13:43.298 "uuid": "6d8d59be-f2b7-421e-849a-f18c73f1d2b8", 00:13:43.298 "strip_size_kb": 64, 00:13:43.298 "state": "online", 00:13:43.298 "raid_level": "raid5f", 00:13:43.298 "superblock": false, 00:13:43.298 "num_base_bdevs": 3, 00:13:43.298 "num_base_bdevs_discovered": 2, 00:13:43.298 "num_base_bdevs_operational": 2, 00:13:43.298 "base_bdevs_list": [ 00:13:43.298 { 00:13:43.298 "name": null, 00:13:43.298 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:43.299 "is_configured": false, 00:13:43.299 "data_offset": 0, 00:13:43.299 "data_size": 65536 00:13:43.299 }, 00:13:43.299 { 00:13:43.299 "name": "BaseBdev2", 00:13:43.299 "uuid": "c49aee7f-986b-58c6-9cba-bb7f3abae3e2", 00:13:43.299 "is_configured": true, 00:13:43.299 "data_offset": 0, 00:13:43.299 "data_size": 65536 00:13:43.299 }, 00:13:43.299 { 00:13:43.299 "name": "BaseBdev3", 00:13:43.299 "uuid": "79ec4199-522f-554c-8921-f964321ac644", 00:13:43.299 "is_configured": true, 00:13:43.299 "data_offset": 0, 00:13:43.299 "data_size": 65536 00:13:43.299 } 00:13:43.299 ] 00:13:43.299 }' 00:13:43.299 12:57:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.299 12:57:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.558 12:57:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:43.558 12:57:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.558 12:57:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.558 [2024-11-26 12:57:01.168327] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:43.558 [2024-11-26 12:57:01.174994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b4e0 00:13:43.558 12:57:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.558 12:57:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:43.558 [2024-11-26 12:57:01.177509] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:44.939 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:44.939 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.939 
12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:44.939 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:44.939 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.939 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.939 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.939 12:57:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.939 12:57:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.939 12:57:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.939 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.939 "name": "raid_bdev1", 00:13:44.939 "uuid": "6d8d59be-f2b7-421e-849a-f18c73f1d2b8", 00:13:44.939 "strip_size_kb": 64, 00:13:44.939 "state": "online", 00:13:44.939 "raid_level": "raid5f", 00:13:44.939 "superblock": false, 00:13:44.939 "num_base_bdevs": 3, 00:13:44.939 "num_base_bdevs_discovered": 3, 00:13:44.939 "num_base_bdevs_operational": 3, 00:13:44.939 "process": { 00:13:44.939 "type": "rebuild", 00:13:44.939 "target": "spare", 00:13:44.939 "progress": { 00:13:44.939 "blocks": 20480, 00:13:44.939 "percent": 15 00:13:44.939 } 00:13:44.939 }, 00:13:44.939 "base_bdevs_list": [ 00:13:44.939 { 00:13:44.939 "name": "spare", 00:13:44.939 "uuid": "ddf7fc8f-0471-517e-b70f-af0414f303bc", 00:13:44.939 "is_configured": true, 00:13:44.939 "data_offset": 0, 00:13:44.939 "data_size": 65536 00:13:44.939 }, 00:13:44.939 { 00:13:44.939 "name": "BaseBdev2", 00:13:44.939 "uuid": "c49aee7f-986b-58c6-9cba-bb7f3abae3e2", 00:13:44.939 "is_configured": true, 00:13:44.939 "data_offset": 0, 00:13:44.939 "data_size": 65536 00:13:44.939 }, 00:13:44.939 
{ 00:13:44.939 "name": "BaseBdev3", 00:13:44.939 "uuid": "79ec4199-522f-554c-8921-f964321ac644", 00:13:44.939 "is_configured": true, 00:13:44.939 "data_offset": 0, 00:13:44.939 "data_size": 65536 00:13:44.939 } 00:13:44.939 ] 00:13:44.939 }' 00:13:44.939 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.939 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:44.939 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.940 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:44.940 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:44.940 12:57:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.940 12:57:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.940 [2024-11-26 12:57:02.317347] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:44.940 [2024-11-26 12:57:02.386308] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:44.940 [2024-11-26 12:57:02.386378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.940 [2024-11-26 12:57:02.386396] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:44.940 [2024-11-26 12:57:02.386410] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:44.940 12:57:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.940 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:44.940 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:13:44.940 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.940 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:44.940 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.940 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:44.940 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.940 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.940 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.940 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.940 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.940 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.940 12:57:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.940 12:57:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.940 12:57:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.940 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.940 "name": "raid_bdev1", 00:13:44.940 "uuid": "6d8d59be-f2b7-421e-849a-f18c73f1d2b8", 00:13:44.940 "strip_size_kb": 64, 00:13:44.940 "state": "online", 00:13:44.940 "raid_level": "raid5f", 00:13:44.940 "superblock": false, 00:13:44.940 "num_base_bdevs": 3, 00:13:44.940 "num_base_bdevs_discovered": 2, 00:13:44.940 "num_base_bdevs_operational": 2, 00:13:44.940 "base_bdevs_list": [ 00:13:44.940 { 00:13:44.940 "name": null, 00:13:44.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.940 
"is_configured": false, 00:13:44.940 "data_offset": 0, 00:13:44.940 "data_size": 65536 00:13:44.940 }, 00:13:44.940 { 00:13:44.940 "name": "BaseBdev2", 00:13:44.940 "uuid": "c49aee7f-986b-58c6-9cba-bb7f3abae3e2", 00:13:44.940 "is_configured": true, 00:13:44.940 "data_offset": 0, 00:13:44.940 "data_size": 65536 00:13:44.940 }, 00:13:44.940 { 00:13:44.940 "name": "BaseBdev3", 00:13:44.940 "uuid": "79ec4199-522f-554c-8921-f964321ac644", 00:13:44.940 "is_configured": true, 00:13:44.940 "data_offset": 0, 00:13:44.940 "data_size": 65536 00:13:44.940 } 00:13:44.940 ] 00:13:44.940 }' 00:13:44.940 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.940 12:57:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.200 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:45.200 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.200 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:45.200 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:45.200 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.200 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.200 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.200 12:57:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.200 12:57:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.200 12:57:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.200 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.200 "name": 
"raid_bdev1", 00:13:45.200 "uuid": "6d8d59be-f2b7-421e-849a-f18c73f1d2b8", 00:13:45.200 "strip_size_kb": 64, 00:13:45.200 "state": "online", 00:13:45.200 "raid_level": "raid5f", 00:13:45.200 "superblock": false, 00:13:45.200 "num_base_bdevs": 3, 00:13:45.200 "num_base_bdevs_discovered": 2, 00:13:45.200 "num_base_bdevs_operational": 2, 00:13:45.200 "base_bdevs_list": [ 00:13:45.200 { 00:13:45.200 "name": null, 00:13:45.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.200 "is_configured": false, 00:13:45.200 "data_offset": 0, 00:13:45.200 "data_size": 65536 00:13:45.200 }, 00:13:45.200 { 00:13:45.200 "name": "BaseBdev2", 00:13:45.200 "uuid": "c49aee7f-986b-58c6-9cba-bb7f3abae3e2", 00:13:45.200 "is_configured": true, 00:13:45.200 "data_offset": 0, 00:13:45.200 "data_size": 65536 00:13:45.200 }, 00:13:45.200 { 00:13:45.200 "name": "BaseBdev3", 00:13:45.200 "uuid": "79ec4199-522f-554c-8921-f964321ac644", 00:13:45.200 "is_configured": true, 00:13:45.200 "data_offset": 0, 00:13:45.200 "data_size": 65536 00:13:45.200 } 00:13:45.200 ] 00:13:45.200 }' 00:13:45.200 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.460 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:45.460 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.460 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:45.460 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:45.460 12:57:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.460 12:57:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.460 [2024-11-26 12:57:02.942711] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:45.460 [2024-11-26 
12:57:02.948935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:13:45.460 12:57:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.460 12:57:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:45.460 [2024-11-26 12:57:02.951338] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:46.400 12:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:46.400 12:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.400 12:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:46.400 12:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:46.400 12:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.401 12:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.401 12:57:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.401 12:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.401 12:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.401 12:57:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.401 12:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.401 "name": "raid_bdev1", 00:13:46.401 "uuid": "6d8d59be-f2b7-421e-849a-f18c73f1d2b8", 00:13:46.401 "strip_size_kb": 64, 00:13:46.401 "state": "online", 00:13:46.401 "raid_level": "raid5f", 00:13:46.401 "superblock": false, 00:13:46.401 "num_base_bdevs": 3, 00:13:46.401 "num_base_bdevs_discovered": 3, 00:13:46.401 "num_base_bdevs_operational": 3, 
00:13:46.401 "process": { 00:13:46.401 "type": "rebuild", 00:13:46.401 "target": "spare", 00:13:46.401 "progress": { 00:13:46.401 "blocks": 20480, 00:13:46.401 "percent": 15 00:13:46.401 } 00:13:46.401 }, 00:13:46.401 "base_bdevs_list": [ 00:13:46.401 { 00:13:46.401 "name": "spare", 00:13:46.401 "uuid": "ddf7fc8f-0471-517e-b70f-af0414f303bc", 00:13:46.401 "is_configured": true, 00:13:46.401 "data_offset": 0, 00:13:46.401 "data_size": 65536 00:13:46.401 }, 00:13:46.401 { 00:13:46.401 "name": "BaseBdev2", 00:13:46.401 "uuid": "c49aee7f-986b-58c6-9cba-bb7f3abae3e2", 00:13:46.401 "is_configured": true, 00:13:46.401 "data_offset": 0, 00:13:46.401 "data_size": 65536 00:13:46.401 }, 00:13:46.401 { 00:13:46.401 "name": "BaseBdev3", 00:13:46.401 "uuid": "79ec4199-522f-554c-8921-f964321ac644", 00:13:46.401 "is_configured": true, 00:13:46.401 "data_offset": 0, 00:13:46.401 "data_size": 65536 00:13:46.401 } 00:13:46.401 ] 00:13:46.401 }' 00:13:46.401 12:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.401 12:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:46.401 12:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.660 12:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:46.660 12:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:46.660 12:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:13:46.660 12:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:13:46.660 12:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=448 00:13:46.660 12:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:46.660 12:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 
-- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:46.660 12:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.660 12:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:46.660 12:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:46.660 12:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.660 12:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.660 12:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.660 12:57:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.660 12:57:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.660 12:57:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.660 12:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.660 "name": "raid_bdev1", 00:13:46.660 "uuid": "6d8d59be-f2b7-421e-849a-f18c73f1d2b8", 00:13:46.660 "strip_size_kb": 64, 00:13:46.660 "state": "online", 00:13:46.660 "raid_level": "raid5f", 00:13:46.660 "superblock": false, 00:13:46.660 "num_base_bdevs": 3, 00:13:46.660 "num_base_bdevs_discovered": 3, 00:13:46.661 "num_base_bdevs_operational": 3, 00:13:46.661 "process": { 00:13:46.661 "type": "rebuild", 00:13:46.661 "target": "spare", 00:13:46.661 "progress": { 00:13:46.661 "blocks": 22528, 00:13:46.661 "percent": 17 00:13:46.661 } 00:13:46.661 }, 00:13:46.661 "base_bdevs_list": [ 00:13:46.661 { 00:13:46.661 "name": "spare", 00:13:46.661 "uuid": "ddf7fc8f-0471-517e-b70f-af0414f303bc", 00:13:46.661 "is_configured": true, 00:13:46.661 "data_offset": 0, 00:13:46.661 "data_size": 65536 00:13:46.661 }, 00:13:46.661 { 00:13:46.661 "name": "BaseBdev2", 
00:13:46.661 "uuid": "c49aee7f-986b-58c6-9cba-bb7f3abae3e2", 00:13:46.661 "is_configured": true, 00:13:46.661 "data_offset": 0, 00:13:46.661 "data_size": 65536 00:13:46.661 }, 00:13:46.661 { 00:13:46.661 "name": "BaseBdev3", 00:13:46.661 "uuid": "79ec4199-522f-554c-8921-f964321ac644", 00:13:46.661 "is_configured": true, 00:13:46.661 "data_offset": 0, 00:13:46.661 "data_size": 65536 00:13:46.661 } 00:13:46.661 ] 00:13:46.661 }' 00:13:46.661 12:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.661 12:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:46.661 12:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.661 12:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:46.661 12:57:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:47.599 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:47.599 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.599 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.599 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.599 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.599 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.599 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.599 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.599 12:57:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.599 
12:57:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.599 12:57:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.858 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.858 "name": "raid_bdev1", 00:13:47.858 "uuid": "6d8d59be-f2b7-421e-849a-f18c73f1d2b8", 00:13:47.858 "strip_size_kb": 64, 00:13:47.858 "state": "online", 00:13:47.858 "raid_level": "raid5f", 00:13:47.858 "superblock": false, 00:13:47.858 "num_base_bdevs": 3, 00:13:47.858 "num_base_bdevs_discovered": 3, 00:13:47.858 "num_base_bdevs_operational": 3, 00:13:47.858 "process": { 00:13:47.858 "type": "rebuild", 00:13:47.858 "target": "spare", 00:13:47.859 "progress": { 00:13:47.859 "blocks": 45056, 00:13:47.859 "percent": 34 00:13:47.859 } 00:13:47.859 }, 00:13:47.859 "base_bdevs_list": [ 00:13:47.859 { 00:13:47.859 "name": "spare", 00:13:47.859 "uuid": "ddf7fc8f-0471-517e-b70f-af0414f303bc", 00:13:47.859 "is_configured": true, 00:13:47.859 "data_offset": 0, 00:13:47.859 "data_size": 65536 00:13:47.859 }, 00:13:47.859 { 00:13:47.859 "name": "BaseBdev2", 00:13:47.859 "uuid": "c49aee7f-986b-58c6-9cba-bb7f3abae3e2", 00:13:47.859 "is_configured": true, 00:13:47.859 "data_offset": 0, 00:13:47.859 "data_size": 65536 00:13:47.859 }, 00:13:47.859 { 00:13:47.859 "name": "BaseBdev3", 00:13:47.859 "uuid": "79ec4199-522f-554c-8921-f964321ac644", 00:13:47.859 "is_configured": true, 00:13:47.859 "data_offset": 0, 00:13:47.859 "data_size": 65536 00:13:47.859 } 00:13:47.859 ] 00:13:47.859 }' 00:13:47.859 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.859 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.859 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.859 12:57:05 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.859 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:48.798 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:48.798 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:48.798 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.798 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:48.798 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:48.798 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.798 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.798 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.798 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.798 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.798 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.798 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.798 "name": "raid_bdev1", 00:13:48.798 "uuid": "6d8d59be-f2b7-421e-849a-f18c73f1d2b8", 00:13:48.798 "strip_size_kb": 64, 00:13:48.798 "state": "online", 00:13:48.798 "raid_level": "raid5f", 00:13:48.798 "superblock": false, 00:13:48.798 "num_base_bdevs": 3, 00:13:48.798 "num_base_bdevs_discovered": 3, 00:13:48.798 "num_base_bdevs_operational": 3, 00:13:48.798 "process": { 00:13:48.798 "type": "rebuild", 00:13:48.798 "target": "spare", 00:13:48.798 "progress": { 00:13:48.798 "blocks": 69632, 00:13:48.798 "percent": 53 00:13:48.798 } 
00:13:48.798 }, 00:13:48.798 "base_bdevs_list": [ 00:13:48.798 { 00:13:48.798 "name": "spare", 00:13:48.798 "uuid": "ddf7fc8f-0471-517e-b70f-af0414f303bc", 00:13:48.798 "is_configured": true, 00:13:48.798 "data_offset": 0, 00:13:48.798 "data_size": 65536 00:13:48.798 }, 00:13:48.798 { 00:13:48.798 "name": "BaseBdev2", 00:13:48.798 "uuid": "c49aee7f-986b-58c6-9cba-bb7f3abae3e2", 00:13:48.798 "is_configured": true, 00:13:48.798 "data_offset": 0, 00:13:48.798 "data_size": 65536 00:13:48.798 }, 00:13:48.798 { 00:13:48.798 "name": "BaseBdev3", 00:13:48.798 "uuid": "79ec4199-522f-554c-8921-f964321ac644", 00:13:48.798 "is_configured": true, 00:13:48.798 "data_offset": 0, 00:13:48.798 "data_size": 65536 00:13:48.798 } 00:13:48.798 ] 00:13:48.798 }' 00:13:48.798 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.798 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:49.065 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.065 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:49.065 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:50.040 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:50.040 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:50.040 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.040 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:50.040 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:50.040 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.040 12:57:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.040 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.040 12:57:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.040 12:57:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.040 12:57:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.040 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.040 "name": "raid_bdev1", 00:13:50.040 "uuid": "6d8d59be-f2b7-421e-849a-f18c73f1d2b8", 00:13:50.040 "strip_size_kb": 64, 00:13:50.040 "state": "online", 00:13:50.040 "raid_level": "raid5f", 00:13:50.040 "superblock": false, 00:13:50.040 "num_base_bdevs": 3, 00:13:50.040 "num_base_bdevs_discovered": 3, 00:13:50.040 "num_base_bdevs_operational": 3, 00:13:50.040 "process": { 00:13:50.040 "type": "rebuild", 00:13:50.040 "target": "spare", 00:13:50.040 "progress": { 00:13:50.040 "blocks": 92160, 00:13:50.040 "percent": 70 00:13:50.040 } 00:13:50.040 }, 00:13:50.040 "base_bdevs_list": [ 00:13:50.040 { 00:13:50.040 "name": "spare", 00:13:50.040 "uuid": "ddf7fc8f-0471-517e-b70f-af0414f303bc", 00:13:50.040 "is_configured": true, 00:13:50.040 "data_offset": 0, 00:13:50.040 "data_size": 65536 00:13:50.040 }, 00:13:50.040 { 00:13:50.040 "name": "BaseBdev2", 00:13:50.040 "uuid": "c49aee7f-986b-58c6-9cba-bb7f3abae3e2", 00:13:50.040 "is_configured": true, 00:13:50.040 "data_offset": 0, 00:13:50.040 "data_size": 65536 00:13:50.040 }, 00:13:50.040 { 00:13:50.040 "name": "BaseBdev3", 00:13:50.040 "uuid": "79ec4199-522f-554c-8921-f964321ac644", 00:13:50.040 "is_configured": true, 00:13:50.040 "data_offset": 0, 00:13:50.040 "data_size": 65536 00:13:50.040 } 00:13:50.040 ] 00:13:50.040 }' 00:13:50.041 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:13:50.041 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:50.041 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.041 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:50.041 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:51.423 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:51.423 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:51.423 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.423 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:51.423 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:51.423 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.423 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.423 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.423 12:57:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.423 12:57:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.423 12:57:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.423 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.423 "name": "raid_bdev1", 00:13:51.423 "uuid": "6d8d59be-f2b7-421e-849a-f18c73f1d2b8", 00:13:51.423 "strip_size_kb": 64, 00:13:51.423 "state": "online", 00:13:51.423 "raid_level": "raid5f", 00:13:51.423 "superblock": 
false, 00:13:51.423 "num_base_bdevs": 3, 00:13:51.423 "num_base_bdevs_discovered": 3, 00:13:51.423 "num_base_bdevs_operational": 3, 00:13:51.423 "process": { 00:13:51.423 "type": "rebuild", 00:13:51.423 "target": "spare", 00:13:51.423 "progress": { 00:13:51.423 "blocks": 114688, 00:13:51.423 "percent": 87 00:13:51.423 } 00:13:51.423 }, 00:13:51.423 "base_bdevs_list": [ 00:13:51.423 { 00:13:51.423 "name": "spare", 00:13:51.423 "uuid": "ddf7fc8f-0471-517e-b70f-af0414f303bc", 00:13:51.423 "is_configured": true, 00:13:51.423 "data_offset": 0, 00:13:51.423 "data_size": 65536 00:13:51.423 }, 00:13:51.423 { 00:13:51.423 "name": "BaseBdev2", 00:13:51.423 "uuid": "c49aee7f-986b-58c6-9cba-bb7f3abae3e2", 00:13:51.423 "is_configured": true, 00:13:51.423 "data_offset": 0, 00:13:51.423 "data_size": 65536 00:13:51.423 }, 00:13:51.423 { 00:13:51.423 "name": "BaseBdev3", 00:13:51.423 "uuid": "79ec4199-522f-554c-8921-f964321ac644", 00:13:51.423 "is_configured": true, 00:13:51.423 "data_offset": 0, 00:13:51.423 "data_size": 65536 00:13:51.423 } 00:13:51.423 ] 00:13:51.423 }' 00:13:51.423 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.423 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:51.423 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.423 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:51.423 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:51.995 [2024-11-26 12:57:09.394149] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:51.995 [2024-11-26 12:57:09.394292] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:51.995 [2024-11-26 12:57:09.394374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:52.254 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:52.254 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.254 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.254 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.254 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.254 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.254 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.254 12:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.254 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.254 12:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.254 12:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.254 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.255 "name": "raid_bdev1", 00:13:52.255 "uuid": "6d8d59be-f2b7-421e-849a-f18c73f1d2b8", 00:13:52.255 "strip_size_kb": 64, 00:13:52.255 "state": "online", 00:13:52.255 "raid_level": "raid5f", 00:13:52.255 "superblock": false, 00:13:52.255 "num_base_bdevs": 3, 00:13:52.255 "num_base_bdevs_discovered": 3, 00:13:52.255 "num_base_bdevs_operational": 3, 00:13:52.255 "base_bdevs_list": [ 00:13:52.255 { 00:13:52.255 "name": "spare", 00:13:52.255 "uuid": "ddf7fc8f-0471-517e-b70f-af0414f303bc", 00:13:52.255 "is_configured": true, 00:13:52.255 "data_offset": 0, 00:13:52.255 "data_size": 65536 00:13:52.255 }, 00:13:52.255 { 00:13:52.255 "name": "BaseBdev2", 00:13:52.255 "uuid": 
"c49aee7f-986b-58c6-9cba-bb7f3abae3e2", 00:13:52.255 "is_configured": true, 00:13:52.255 "data_offset": 0, 00:13:52.255 "data_size": 65536 00:13:52.255 }, 00:13:52.255 { 00:13:52.255 "name": "BaseBdev3", 00:13:52.255 "uuid": "79ec4199-522f-554c-8921-f964321ac644", 00:13:52.255 "is_configured": true, 00:13:52.255 "data_offset": 0, 00:13:52.255 "data_size": 65536 00:13:52.255 } 00:13:52.255 ] 00:13:52.255 }' 00:13:52.255 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.255 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:52.255 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.514 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:52.514 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:52.514 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:52.514 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.514 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:52.514 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:52.514 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.514 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.514 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.514 12:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.514 12:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.514 12:57:09 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.514 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.514 "name": "raid_bdev1", 00:13:52.514 "uuid": "6d8d59be-f2b7-421e-849a-f18c73f1d2b8", 00:13:52.514 "strip_size_kb": 64, 00:13:52.514 "state": "online", 00:13:52.514 "raid_level": "raid5f", 00:13:52.514 "superblock": false, 00:13:52.514 "num_base_bdevs": 3, 00:13:52.514 "num_base_bdevs_discovered": 3, 00:13:52.514 "num_base_bdevs_operational": 3, 00:13:52.514 "base_bdevs_list": [ 00:13:52.514 { 00:13:52.514 "name": "spare", 00:13:52.514 "uuid": "ddf7fc8f-0471-517e-b70f-af0414f303bc", 00:13:52.514 "is_configured": true, 00:13:52.514 "data_offset": 0, 00:13:52.514 "data_size": 65536 00:13:52.514 }, 00:13:52.514 { 00:13:52.514 "name": "BaseBdev2", 00:13:52.514 "uuid": "c49aee7f-986b-58c6-9cba-bb7f3abae3e2", 00:13:52.514 "is_configured": true, 00:13:52.514 "data_offset": 0, 00:13:52.514 "data_size": 65536 00:13:52.514 }, 00:13:52.514 { 00:13:52.514 "name": "BaseBdev3", 00:13:52.514 "uuid": "79ec4199-522f-554c-8921-f964321ac644", 00:13:52.514 "is_configured": true, 00:13:52.514 "data_offset": 0, 00:13:52.514 "data_size": 65536 00:13:52.514 } 00:13:52.514 ] 00:13:52.514 }' 00:13:52.514 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.514 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:52.514 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.514 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:52.514 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:52.514 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.514 12:57:10 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.514 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:52.514 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.514 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:52.514 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.514 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.514 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.514 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.514 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.514 12:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.514 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.514 12:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.514 12:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.514 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.514 "name": "raid_bdev1", 00:13:52.514 "uuid": "6d8d59be-f2b7-421e-849a-f18c73f1d2b8", 00:13:52.514 "strip_size_kb": 64, 00:13:52.514 "state": "online", 00:13:52.514 "raid_level": "raid5f", 00:13:52.514 "superblock": false, 00:13:52.515 "num_base_bdevs": 3, 00:13:52.515 "num_base_bdevs_discovered": 3, 00:13:52.515 "num_base_bdevs_operational": 3, 00:13:52.515 "base_bdevs_list": [ 00:13:52.515 { 00:13:52.515 "name": "spare", 00:13:52.515 "uuid": "ddf7fc8f-0471-517e-b70f-af0414f303bc", 00:13:52.515 "is_configured": true, 00:13:52.515 "data_offset": 
0, 00:13:52.515 "data_size": 65536 00:13:52.515 }, 00:13:52.515 { 00:13:52.515 "name": "BaseBdev2", 00:13:52.515 "uuid": "c49aee7f-986b-58c6-9cba-bb7f3abae3e2", 00:13:52.515 "is_configured": true, 00:13:52.515 "data_offset": 0, 00:13:52.515 "data_size": 65536 00:13:52.515 }, 00:13:52.515 { 00:13:52.515 "name": "BaseBdev3", 00:13:52.515 "uuid": "79ec4199-522f-554c-8921-f964321ac644", 00:13:52.515 "is_configured": true, 00:13:52.515 "data_offset": 0, 00:13:52.515 "data_size": 65536 00:13:52.515 } 00:13:52.515 ] 00:13:52.515 }' 00:13:52.515 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.515 12:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.083 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:53.083 12:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.083 12:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.083 [2024-11-26 12:57:10.573311] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:53.083 [2024-11-26 12:57:10.573406] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:53.083 [2024-11-26 12:57:10.573518] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:53.083 [2024-11-26 12:57:10.573622] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:53.083 [2024-11-26 12:57:10.573633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:53.083 12:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.083 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.083 12:57:10 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.083 12:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.083 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:53.083 12:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.083 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:53.083 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:53.083 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:53.083 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:53.083 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:53.083 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:53.083 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:53.083 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:53.083 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:53.083 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:53.083 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:53.083 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:53.083 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:53.343 /dev/nbd0 00:13:53.343 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:53.343 12:57:10 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:53.343 12:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:53.343 12:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:53.343 12:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:53.343 12:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:53.343 12:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:53.343 12:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:53.343 12:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:53.343 12:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:53.343 12:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:53.343 1+0 records in 00:13:53.343 1+0 records out 00:13:53.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317955 s, 12.9 MB/s 00:13:53.343 12:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.343 12:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:53.343 12:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.343 12:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:53.343 12:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:53.343 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:53.343 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:53.343 
12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:53.603 /dev/nbd1 00:13:53.603 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:53.603 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:53.603 12:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:53.603 12:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:53.603 12:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:53.603 12:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:53.603 12:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:53.603 12:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:53.603 12:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:53.603 12:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:53.603 12:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:53.603 1+0 records in 00:13:53.603 1+0 records out 00:13:53.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419863 s, 9.8 MB/s 00:13:53.603 12:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.603 12:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:53.603 12:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.603 12:57:11 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:53.603 12:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:53.603 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:53.603 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:53.603 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:53.603 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:53.603 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:53.603 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:53.603 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:53.603 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:53.603 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:53.603 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:53.863 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:53.863 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:53.863 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:53.863 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:53.863 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:53.863 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:53.863 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:13:53.863 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:53.863 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:53.863 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:54.123 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:54.123 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:54.123 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:54.123 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:54.123 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:54.123 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:54.123 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:54.123 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:54.123 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:54.123 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 92296 00:13:54.123 12:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 92296 ']' 00:13:54.123 12:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 92296 00:13:54.123 12:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:13:54.123 12:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:54.123 12:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92296 00:13:54.123 12:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:13:54.123 12:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:54.123 killing process with pid 92296 00:13:54.123 Received shutdown signal, test time was about 60.000000 seconds 00:13:54.123 00:13:54.123 Latency(us) 00:13:54.123 [2024-11-26T12:57:11.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:54.123 [2024-11-26T12:57:11.807Z] =================================================================================================================== 00:13:54.123 [2024-11-26T12:57:11.807Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:54.123 12:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92296' 00:13:54.123 12:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 92296 00:13:54.123 [2024-11-26 12:57:11.666507] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:54.123 12:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 92296 00:13:54.123 [2024-11-26 12:57:11.707656] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:54.384 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:54.384 00:13:54.384 real 0m13.576s 00:13:54.384 user 0m16.696s 00:13:54.384 sys 0m2.119s 00:13:54.384 12:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:54.384 ************************************ 00:13:54.384 END TEST raid5f_rebuild_test 00:13:54.384 ************************************ 00:13:54.384 12:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.384 12:57:12 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:13:54.384 12:57:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:54.384 12:57:12 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:13:54.384 12:57:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:54.384 ************************************ 00:13:54.384 START TEST raid5f_rebuild_test_sb 00:13:54.384 ************************************ 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:54.384 12:57:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=92725 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 92725 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 92725 
']' 00:13:54.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:54.384 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.644 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:54.644 Zero copy mechanism will not be used. 00:13:54.644 [2024-11-26 12:57:12.124917] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:54.644 [2024-11-26 12:57:12.125047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92725 ] 00:13:54.644 [2024-11-26 12:57:12.292719] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.904 [2024-11-26 12:57:12.344346] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.904 [2024-11-26 12:57:12.387992] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.904 [2024-11-26 12:57:12.388028] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:55.474 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:55.474 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:55.474 12:57:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:55.474 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:55.474 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.474 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.474 BaseBdev1_malloc 00:13:55.474 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.474 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:55.474 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.474 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.474 [2024-11-26 12:57:12.950250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:55.474 [2024-11-26 12:57:12.950323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.474 [2024-11-26 12:57:12.950345] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:55.474 [2024-11-26 12:57:12.950358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.474 [2024-11-26 12:57:12.952333] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.474 [2024-11-26 12:57:12.952455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:55.474 BaseBdev1 00:13:55.474 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.474 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:55.474 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # 
rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:55.474 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.474 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.474 BaseBdev2_malloc 00:13:55.474 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.474 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:55.474 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.474 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.474 [2024-11-26 12:57:12.991945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:55.474 [2024-11-26 12:57:12.992125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.474 [2024-11-26 12:57:12.992159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:55.474 [2024-11-26 12:57:12.992191] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.474 [2024-11-26 12:57:12.995023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.475 [2024-11-26 12:57:12.995063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:55.475 BaseBdev2 00:13:55.475 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.475 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:55.475 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:55.475 12:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.475 
12:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.475 BaseBdev3_malloc 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.475 [2024-11-26 12:57:13.020607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:55.475 [2024-11-26 12:57:13.020656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.475 [2024-11-26 12:57:13.020678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:55.475 [2024-11-26 12:57:13.020687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.475 [2024-11-26 12:57:13.022641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.475 [2024-11-26 12:57:13.022738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:55.475 BaseBdev3 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.475 spare_malloc 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 
-- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.475 spare_delay 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.475 [2024-11-26 12:57:13.061157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:55.475 [2024-11-26 12:57:13.061221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.475 [2024-11-26 12:57:13.061246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:55.475 [2024-11-26 12:57:13.061254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.475 [2024-11-26 12:57:13.063255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.475 [2024-11-26 12:57:13.063290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:55.475 spare 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:55.475 [2024-11-26 12:57:13.073224] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:55.475 [2024-11-26 12:57:13.074908] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:55.475 [2024-11-26 12:57:13.074976] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:55.475 [2024-11-26 12:57:13.075119] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:55.475 [2024-11-26 12:57:13.075132] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:55.475 [2024-11-26 12:57:13.075407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:55.475 [2024-11-26 12:57:13.075805] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:55.475 [2024-11-26 12:57:13.075830] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:55.475 [2024-11-26 12:57:13.075957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.475 "name": "raid_bdev1", 00:13:55.475 "uuid": "163c86f6-ab78-46ce-ad53-881a69582158", 00:13:55.475 "strip_size_kb": 64, 00:13:55.475 "state": "online", 00:13:55.475 "raid_level": "raid5f", 00:13:55.475 "superblock": true, 00:13:55.475 "num_base_bdevs": 3, 00:13:55.475 "num_base_bdevs_discovered": 3, 00:13:55.475 "num_base_bdevs_operational": 3, 00:13:55.475 "base_bdevs_list": [ 00:13:55.475 { 00:13:55.475 "name": "BaseBdev1", 00:13:55.475 "uuid": "745a46b3-ec34-5fe9-8edb-eb242503cc2e", 00:13:55.475 "is_configured": true, 00:13:55.475 "data_offset": 2048, 00:13:55.475 "data_size": 63488 00:13:55.475 }, 00:13:55.475 { 00:13:55.475 "name": "BaseBdev2", 00:13:55.475 "uuid": "4eb398ef-3593-5f34-8ed6-cf7dd38fce2f", 00:13:55.475 "is_configured": true, 00:13:55.475 "data_offset": 2048, 00:13:55.475 "data_size": 63488 00:13:55.475 }, 00:13:55.475 { 00:13:55.475 "name": 
"BaseBdev3", 00:13:55.475 "uuid": "18519ccf-0ba1-5443-8837-c562d087513f", 00:13:55.475 "is_configured": true, 00:13:55.475 "data_offset": 2048, 00:13:55.475 "data_size": 63488 00:13:55.475 } 00:13:55.475 ] 00:13:55.475 }' 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.475 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.044 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:56.044 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.044 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.044 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:56.044 [2024-11-26 12:57:13.532747] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:56.044 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.044 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:13:56.044 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:56.045 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.045 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.045 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.045 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.045 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:56.045 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:56.045 12:57:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:56.045 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:56.045 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:56.045 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:56.045 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:56.045 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:56.045 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:56.045 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:56.045 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:56.045 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:56.045 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:56.045 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:56.305 [2024-11-26 12:57:13.808176] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:56.305 /dev/nbd0 00:13:56.305 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:56.305 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:56.305 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:56.305 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:56.305 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 
-- # (( i = 1 )) 00:13:56.305 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:56.305 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:56.305 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:56.305 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:56.305 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:56.305 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:56.305 1+0 records in 00:13:56.305 1+0 records out 00:13:56.305 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000525442 s, 7.8 MB/s 00:13:56.305 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.305 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:56.305 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.305 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:56.305 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:56.305 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:56.305 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:56.305 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:56.305 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:13:56.305 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 
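The sizing the harness just derived (write_unit_size=256 blocks, echo 128) feeds the full-stripe dd write that follows. As a standalone sketch of that arithmetic, with values read from this log rather than from SPDK source, and assuming the usual RAID-5 layout where a 3-disk raid5f stripe holds 2 data strips plus 1 parity strip:

```shell
# Sketch only: reproduce the full-stripe write sizing seen in this log.
# Inputs taken from the log: strip_size_kb=64 (-z 64), 3 base bdevs,
# raid bdev of 126976 blocks x 512 B (blockcnt/blocklen DEBUG line).
strip_size_kb=64
num_base_bdevs=3
blockcnt=126976
blocklen=512

# Assumption: raid5f stripes hold (num_base_bdevs - 1) data strips,
# so a full-stripe write unit is strip size times the data-disk count.
data_disks=$((num_base_bdevs - 1))
write_unit_bytes=$((strip_size_kb * 1024 * data_disks))   # 131072 = dd bs

# Number of full stripes that fit in the raid bdev = dd count.
raid_bytes=$((blockcnt * blocklen))                        # 65011712
count=$((raid_bytes / write_unit_bytes))                   # 496

echo "$write_unit_bytes $count $raid_bytes"
```

This matches the dd invocation and its result above: bs=131072, count=496, 65011712 bytes (62 MiB) written.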
00:13:56.305 12:57:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:13:56.566 496+0 records in 00:13:56.566 496+0 records out 00:13:56.566 65011712 bytes (65 MB, 62 MiB) copied, 0.301514 s, 216 MB/s 00:13:56.566 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:56.566 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:56.566 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:56.566 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:56.566 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:56.566 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:56.566 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:56.826 [2024-11-26 12:57:14.401271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.826 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:56.826 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:56.826 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:56.826 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:56.826 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:56.826 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:56.826 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:56.826 12:57:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:56.826 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:56.826 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.826 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.826 [2024-11-26 12:57:14.428946] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:56.826 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.826 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:56.826 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.826 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.826 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:56.826 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.826 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:56.826 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.826 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.827 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.827 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.827 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.827 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.827 12:57:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.827 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.827 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.827 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.827 "name": "raid_bdev1", 00:13:56.827 "uuid": "163c86f6-ab78-46ce-ad53-881a69582158", 00:13:56.827 "strip_size_kb": 64, 00:13:56.827 "state": "online", 00:13:56.827 "raid_level": "raid5f", 00:13:56.827 "superblock": true, 00:13:56.827 "num_base_bdevs": 3, 00:13:56.827 "num_base_bdevs_discovered": 2, 00:13:56.827 "num_base_bdevs_operational": 2, 00:13:56.827 "base_bdevs_list": [ 00:13:56.827 { 00:13:56.827 "name": null, 00:13:56.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.827 "is_configured": false, 00:13:56.827 "data_offset": 0, 00:13:56.827 "data_size": 63488 00:13:56.827 }, 00:13:56.827 { 00:13:56.827 "name": "BaseBdev2", 00:13:56.827 "uuid": "4eb398ef-3593-5f34-8ed6-cf7dd38fce2f", 00:13:56.827 "is_configured": true, 00:13:56.827 "data_offset": 2048, 00:13:56.827 "data_size": 63488 00:13:56.827 }, 00:13:56.827 { 00:13:56.827 "name": "BaseBdev3", 00:13:56.827 "uuid": "18519ccf-0ba1-5443-8837-c562d087513f", 00:13:56.827 "is_configured": true, 00:13:56.827 "data_offset": 2048, 00:13:56.827 "data_size": 63488 00:13:56.827 } 00:13:56.827 ] 00:13:56.827 }' 00:13:56.827 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.827 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.397 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:57.397 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.397 12:57:14 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.397 [2024-11-26 12:57:14.836269] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:57.397 [2024-11-26 12:57:14.840109] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028de0 00:13:57.397 [2024-11-26 12:57:14.842258] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:57.397 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.397 12:57:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:58.338 12:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:58.338 12:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.338 12:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.338 12:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.338 12:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.338 12:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.338 12:57:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.338 12:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.338 12:57:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.338 12:57:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.338 12:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.338 "name": "raid_bdev1", 00:13:58.338 "uuid": "163c86f6-ab78-46ce-ad53-881a69582158", 
00:13:58.338 "strip_size_kb": 64, 00:13:58.338 "state": "online", 00:13:58.338 "raid_level": "raid5f", 00:13:58.338 "superblock": true, 00:13:58.338 "num_base_bdevs": 3, 00:13:58.338 "num_base_bdevs_discovered": 3, 00:13:58.338 "num_base_bdevs_operational": 3, 00:13:58.338 "process": { 00:13:58.338 "type": "rebuild", 00:13:58.338 "target": "spare", 00:13:58.338 "progress": { 00:13:58.338 "blocks": 20480, 00:13:58.338 "percent": 16 00:13:58.338 } 00:13:58.338 }, 00:13:58.338 "base_bdevs_list": [ 00:13:58.338 { 00:13:58.338 "name": "spare", 00:13:58.338 "uuid": "381d9d17-6a08-5862-9584-4b64149aa62e", 00:13:58.338 "is_configured": true, 00:13:58.338 "data_offset": 2048, 00:13:58.338 "data_size": 63488 00:13:58.338 }, 00:13:58.338 { 00:13:58.338 "name": "BaseBdev2", 00:13:58.338 "uuid": "4eb398ef-3593-5f34-8ed6-cf7dd38fce2f", 00:13:58.338 "is_configured": true, 00:13:58.338 "data_offset": 2048, 00:13:58.338 "data_size": 63488 00:13:58.338 }, 00:13:58.338 { 00:13:58.338 "name": "BaseBdev3", 00:13:58.338 "uuid": "18519ccf-0ba1-5443-8837-c562d087513f", 00:13:58.338 "is_configured": true, 00:13:58.338 "data_offset": 2048, 00:13:58.338 "data_size": 63488 00:13:58.338 } 00:13:58.338 ] 00:13:58.338 }' 00:13:58.338 12:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.338 12:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:58.338 12:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.338 12:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.338 12:57:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:58.338 12:57:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.338 12:57:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:58.338 [2024-11-26 12:57:15.993244] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:58.598 [2024-11-26 12:57:16.049348] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:58.598 [2024-11-26 12:57:16.049409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.598 [2024-11-26 12:57:16.049423] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:58.598 [2024-11-26 12:57:16.049439] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:58.598 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.598 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:58.598 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.598 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.598 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:58.598 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.598 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:58.598 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.598 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.598 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.598 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.598 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
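Each verify_raid_bdev_state pass above works the same way: pipe `rpc_cmd bdev_raid_get_bdevs all` through the `jq -r '.[] | select(.name == "raid_bdev1")'` filter, then read fields out of the selected object. A minimal sketch of that filter applied to a canned response (requires jq; the JSON below is illustrative, abbreviated from this log's output, not captured data):

```shell
# Sketch only: the jq selection the test helper applies to the RPC output.
# The canned JSON mimics a bdev_raid_get_bdevs response with fields trimmed.
json='[
  {"name": "some_other_bdev"},
  {"name": "raid_bdev1", "state": "online", "raid_level": "raid5f",
   "num_base_bdevs": 3, "num_base_bdevs_discovered": 2,
   "num_base_bdevs_operational": 2}
]'

# Pull out only the entry whose .name matches the raid bdev under test.
raid_bdev_info=$(printf '%s\n' "$json" | jq -r '.[] | select(.name == "raid_bdev1")')

# The helper then checks individual fields of the selected object.
state=$(printf '%s\n' "$raid_bdev_info" | jq -r '.state')
discovered=$(printf '%s\n' "$raid_bdev_info" | jq -r '.num_base_bdevs_discovered')
echo "$state $discovered"
```

After the base-bdev removal logged here, the helper expects exactly this degraded-but-online shape: state online with 2 of 3 base bdevs discovered and operational.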
00:13:58.598 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.598 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.598 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.598 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.598 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.598 "name": "raid_bdev1", 00:13:58.598 "uuid": "163c86f6-ab78-46ce-ad53-881a69582158", 00:13:58.598 "strip_size_kb": 64, 00:13:58.598 "state": "online", 00:13:58.598 "raid_level": "raid5f", 00:13:58.598 "superblock": true, 00:13:58.599 "num_base_bdevs": 3, 00:13:58.599 "num_base_bdevs_discovered": 2, 00:13:58.599 "num_base_bdevs_operational": 2, 00:13:58.599 "base_bdevs_list": [ 00:13:58.599 { 00:13:58.599 "name": null, 00:13:58.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.599 "is_configured": false, 00:13:58.599 "data_offset": 0, 00:13:58.599 "data_size": 63488 00:13:58.599 }, 00:13:58.599 { 00:13:58.599 "name": "BaseBdev2", 00:13:58.599 "uuid": "4eb398ef-3593-5f34-8ed6-cf7dd38fce2f", 00:13:58.599 "is_configured": true, 00:13:58.599 "data_offset": 2048, 00:13:58.599 "data_size": 63488 00:13:58.599 }, 00:13:58.599 { 00:13:58.599 "name": "BaseBdev3", 00:13:58.599 "uuid": "18519ccf-0ba1-5443-8837-c562d087513f", 00:13:58.599 "is_configured": true, 00:13:58.599 "data_offset": 2048, 00:13:58.599 "data_size": 63488 00:13:58.599 } 00:13:58.599 ] 00:13:58.599 }' 00:13:58.599 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.599 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.858 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:58.858 12:57:16 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.858 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:58.858 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:58.858 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.858 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.858 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.858 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.858 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.858 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.119 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.119 "name": "raid_bdev1", 00:13:59.119 "uuid": "163c86f6-ab78-46ce-ad53-881a69582158", 00:13:59.119 "strip_size_kb": 64, 00:13:59.119 "state": "online", 00:13:59.119 "raid_level": "raid5f", 00:13:59.119 "superblock": true, 00:13:59.119 "num_base_bdevs": 3, 00:13:59.119 "num_base_bdevs_discovered": 2, 00:13:59.119 "num_base_bdevs_operational": 2, 00:13:59.119 "base_bdevs_list": [ 00:13:59.119 { 00:13:59.119 "name": null, 00:13:59.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.119 "is_configured": false, 00:13:59.119 "data_offset": 0, 00:13:59.119 "data_size": 63488 00:13:59.119 }, 00:13:59.119 { 00:13:59.119 "name": "BaseBdev2", 00:13:59.119 "uuid": "4eb398ef-3593-5f34-8ed6-cf7dd38fce2f", 00:13:59.119 "is_configured": true, 00:13:59.119 "data_offset": 2048, 00:13:59.119 "data_size": 63488 00:13:59.119 }, 00:13:59.119 { 00:13:59.119 "name": "BaseBdev3", 00:13:59.119 "uuid": 
"18519ccf-0ba1-5443-8837-c562d087513f", 00:13:59.119 "is_configured": true, 00:13:59.119 "data_offset": 2048, 00:13:59.119 "data_size": 63488 00:13:59.119 } 00:13:59.119 ] 00:13:59.119 }' 00:13:59.119 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.119 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:59.119 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.119 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:59.119 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:59.119 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.119 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.119 [2024-11-26 12:57:16.625756] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:59.119 [2024-11-26 12:57:16.628663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028eb0 00:13:59.119 [2024-11-26 12:57:16.630696] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:59.119 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.119 12:57:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:00.058 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:00.058 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.058 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:00.058 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:14:00.058 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.058 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.058 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.058 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.058 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.058 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.059 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.059 "name": "raid_bdev1", 00:14:00.059 "uuid": "163c86f6-ab78-46ce-ad53-881a69582158", 00:14:00.059 "strip_size_kb": 64, 00:14:00.059 "state": "online", 00:14:00.059 "raid_level": "raid5f", 00:14:00.059 "superblock": true, 00:14:00.059 "num_base_bdevs": 3, 00:14:00.059 "num_base_bdevs_discovered": 3, 00:14:00.059 "num_base_bdevs_operational": 3, 00:14:00.059 "process": { 00:14:00.059 "type": "rebuild", 00:14:00.059 "target": "spare", 00:14:00.059 "progress": { 00:14:00.059 "blocks": 20480, 00:14:00.059 "percent": 16 00:14:00.059 } 00:14:00.059 }, 00:14:00.059 "base_bdevs_list": [ 00:14:00.059 { 00:14:00.059 "name": "spare", 00:14:00.059 "uuid": "381d9d17-6a08-5862-9584-4b64149aa62e", 00:14:00.059 "is_configured": true, 00:14:00.059 "data_offset": 2048, 00:14:00.059 "data_size": 63488 00:14:00.059 }, 00:14:00.059 { 00:14:00.059 "name": "BaseBdev2", 00:14:00.059 "uuid": "4eb398ef-3593-5f34-8ed6-cf7dd38fce2f", 00:14:00.059 "is_configured": true, 00:14:00.059 "data_offset": 2048, 00:14:00.059 "data_size": 63488 00:14:00.059 }, 00:14:00.059 { 00:14:00.059 "name": "BaseBdev3", 00:14:00.059 "uuid": "18519ccf-0ba1-5443-8837-c562d087513f", 00:14:00.059 
"is_configured": true, 00:14:00.059 "data_offset": 2048, 00:14:00.059 "data_size": 63488 00:14:00.059 } 00:14:00.059 ] 00:14:00.059 }' 00:14:00.059 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.059 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:00.059 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.319 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:00.319 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:00.319 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:00.319 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:00.319 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:00.319 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:00.319 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=461 00:14:00.319 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:00.319 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:00.319 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.319 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:00.319 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:00.319 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.319 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.319 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.319 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.319 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.319 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.319 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.319 "name": "raid_bdev1", 00:14:00.319 "uuid": "163c86f6-ab78-46ce-ad53-881a69582158", 00:14:00.319 "strip_size_kb": 64, 00:14:00.319 "state": "online", 00:14:00.319 "raid_level": "raid5f", 00:14:00.319 "superblock": true, 00:14:00.319 "num_base_bdevs": 3, 00:14:00.319 "num_base_bdevs_discovered": 3, 00:14:00.319 "num_base_bdevs_operational": 3, 00:14:00.319 "process": { 00:14:00.319 "type": "rebuild", 00:14:00.319 "target": "spare", 00:14:00.319 "progress": { 00:14:00.319 "blocks": 22528, 00:14:00.319 "percent": 17 00:14:00.319 } 00:14:00.319 }, 00:14:00.319 "base_bdevs_list": [ 00:14:00.319 { 00:14:00.319 "name": "spare", 00:14:00.319 "uuid": "381d9d17-6a08-5862-9584-4b64149aa62e", 00:14:00.319 "is_configured": true, 00:14:00.319 "data_offset": 2048, 00:14:00.320 "data_size": 63488 00:14:00.320 }, 00:14:00.320 { 00:14:00.320 "name": "BaseBdev2", 00:14:00.320 "uuid": "4eb398ef-3593-5f34-8ed6-cf7dd38fce2f", 00:14:00.320 "is_configured": true, 00:14:00.320 "data_offset": 2048, 00:14:00.320 "data_size": 63488 00:14:00.320 }, 00:14:00.320 { 00:14:00.320 "name": "BaseBdev3", 00:14:00.320 "uuid": "18519ccf-0ba1-5443-8837-c562d087513f", 00:14:00.320 "is_configured": true, 00:14:00.320 "data_offset": 2048, 00:14:00.320 "data_size": 63488 00:14:00.320 } 00:14:00.320 ] 00:14:00.320 }' 00:14:00.320 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:14:00.320 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:00.320 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.320 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:00.320 12:57:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:01.255 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:01.255 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:01.255 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.255 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:01.255 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:01.255 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.255 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.255 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.255 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.255 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.255 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.255 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.255 "name": "raid_bdev1", 00:14:01.255 "uuid": "163c86f6-ab78-46ce-ad53-881a69582158", 00:14:01.255 "strip_size_kb": 64, 00:14:01.255 "state": "online", 00:14:01.255 
"raid_level": "raid5f", 00:14:01.255 "superblock": true, 00:14:01.255 "num_base_bdevs": 3, 00:14:01.255 "num_base_bdevs_discovered": 3, 00:14:01.255 "num_base_bdevs_operational": 3, 00:14:01.255 "process": { 00:14:01.255 "type": "rebuild", 00:14:01.255 "target": "spare", 00:14:01.255 "progress": { 00:14:01.255 "blocks": 45056, 00:14:01.255 "percent": 35 00:14:01.255 } 00:14:01.255 }, 00:14:01.255 "base_bdevs_list": [ 00:14:01.255 { 00:14:01.255 "name": "spare", 00:14:01.255 "uuid": "381d9d17-6a08-5862-9584-4b64149aa62e", 00:14:01.255 "is_configured": true, 00:14:01.255 "data_offset": 2048, 00:14:01.255 "data_size": 63488 00:14:01.255 }, 00:14:01.255 { 00:14:01.255 "name": "BaseBdev2", 00:14:01.255 "uuid": "4eb398ef-3593-5f34-8ed6-cf7dd38fce2f", 00:14:01.255 "is_configured": true, 00:14:01.255 "data_offset": 2048, 00:14:01.255 "data_size": 63488 00:14:01.255 }, 00:14:01.255 { 00:14:01.255 "name": "BaseBdev3", 00:14:01.256 "uuid": "18519ccf-0ba1-5443-8837-c562d087513f", 00:14:01.256 "is_configured": true, 00:14:01.256 "data_offset": 2048, 00:14:01.256 "data_size": 63488 00:14:01.256 } 00:14:01.256 ] 00:14:01.256 }' 00:14:01.256 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.515 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:01.515 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.515 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:01.515 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:02.454 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:02.454 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.454 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.454 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.454 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.454 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.454 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.454 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.454 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.454 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.454 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.454 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.454 "name": "raid_bdev1", 00:14:02.454 "uuid": "163c86f6-ab78-46ce-ad53-881a69582158", 00:14:02.454 "strip_size_kb": 64, 00:14:02.454 "state": "online", 00:14:02.454 "raid_level": "raid5f", 00:14:02.454 "superblock": true, 00:14:02.454 "num_base_bdevs": 3, 00:14:02.454 "num_base_bdevs_discovered": 3, 00:14:02.454 "num_base_bdevs_operational": 3, 00:14:02.454 "process": { 00:14:02.454 "type": "rebuild", 00:14:02.454 "target": "spare", 00:14:02.454 "progress": { 00:14:02.454 "blocks": 67584, 00:14:02.454 "percent": 53 00:14:02.454 } 00:14:02.454 }, 00:14:02.454 "base_bdevs_list": [ 00:14:02.454 { 00:14:02.454 "name": "spare", 00:14:02.454 "uuid": "381d9d17-6a08-5862-9584-4b64149aa62e", 00:14:02.454 "is_configured": true, 00:14:02.454 "data_offset": 2048, 00:14:02.454 "data_size": 63488 00:14:02.454 }, 00:14:02.454 { 00:14:02.454 "name": "BaseBdev2", 00:14:02.454 "uuid": "4eb398ef-3593-5f34-8ed6-cf7dd38fce2f", 00:14:02.454 
"is_configured": true, 00:14:02.454 "data_offset": 2048, 00:14:02.454 "data_size": 63488 00:14:02.454 }, 00:14:02.454 { 00:14:02.454 "name": "BaseBdev3", 00:14:02.454 "uuid": "18519ccf-0ba1-5443-8837-c562d087513f", 00:14:02.454 "is_configured": true, 00:14:02.454 "data_offset": 2048, 00:14:02.454 "data_size": 63488 00:14:02.454 } 00:14:02.454 ] 00:14:02.454 }' 00:14:02.454 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.454 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.454 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.714 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.714 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:03.655 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:03.655 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.655 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.655 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.655 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.655 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.655 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.655 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.655 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.655 12:57:21 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.655 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.655 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.655 "name": "raid_bdev1", 00:14:03.655 "uuid": "163c86f6-ab78-46ce-ad53-881a69582158", 00:14:03.655 "strip_size_kb": 64, 00:14:03.655 "state": "online", 00:14:03.655 "raid_level": "raid5f", 00:14:03.655 "superblock": true, 00:14:03.655 "num_base_bdevs": 3, 00:14:03.655 "num_base_bdevs_discovered": 3, 00:14:03.655 "num_base_bdevs_operational": 3, 00:14:03.655 "process": { 00:14:03.655 "type": "rebuild", 00:14:03.655 "target": "spare", 00:14:03.655 "progress": { 00:14:03.655 "blocks": 92160, 00:14:03.655 "percent": 72 00:14:03.655 } 00:14:03.655 }, 00:14:03.655 "base_bdevs_list": [ 00:14:03.655 { 00:14:03.655 "name": "spare", 00:14:03.655 "uuid": "381d9d17-6a08-5862-9584-4b64149aa62e", 00:14:03.655 "is_configured": true, 00:14:03.655 "data_offset": 2048, 00:14:03.655 "data_size": 63488 00:14:03.655 }, 00:14:03.655 { 00:14:03.655 "name": "BaseBdev2", 00:14:03.655 "uuid": "4eb398ef-3593-5f34-8ed6-cf7dd38fce2f", 00:14:03.655 "is_configured": true, 00:14:03.655 "data_offset": 2048, 00:14:03.655 "data_size": 63488 00:14:03.655 }, 00:14:03.655 { 00:14:03.655 "name": "BaseBdev3", 00:14:03.655 "uuid": "18519ccf-0ba1-5443-8837-c562d087513f", 00:14:03.655 "is_configured": true, 00:14:03.655 "data_offset": 2048, 00:14:03.655 "data_size": 63488 00:14:03.655 } 00:14:03.655 ] 00:14:03.655 }' 00:14:03.655 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.655 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:03.655 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.655 12:57:21 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.655 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:05.035 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:05.035 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.035 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.035 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.035 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.035 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.035 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.035 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.035 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.035 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.035 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.035 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.035 "name": "raid_bdev1", 00:14:05.035 "uuid": "163c86f6-ab78-46ce-ad53-881a69582158", 00:14:05.035 "strip_size_kb": 64, 00:14:05.035 "state": "online", 00:14:05.035 "raid_level": "raid5f", 00:14:05.035 "superblock": true, 00:14:05.035 "num_base_bdevs": 3, 00:14:05.035 "num_base_bdevs_discovered": 3, 00:14:05.035 "num_base_bdevs_operational": 3, 00:14:05.035 "process": { 00:14:05.035 "type": "rebuild", 00:14:05.035 "target": "spare", 00:14:05.035 "progress": { 00:14:05.035 "blocks": 114688, 
00:14:05.035 "percent": 90 00:14:05.035 } 00:14:05.035 }, 00:14:05.035 "base_bdevs_list": [ 00:14:05.035 { 00:14:05.035 "name": "spare", 00:14:05.035 "uuid": "381d9d17-6a08-5862-9584-4b64149aa62e", 00:14:05.035 "is_configured": true, 00:14:05.035 "data_offset": 2048, 00:14:05.035 "data_size": 63488 00:14:05.035 }, 00:14:05.035 { 00:14:05.035 "name": "BaseBdev2", 00:14:05.035 "uuid": "4eb398ef-3593-5f34-8ed6-cf7dd38fce2f", 00:14:05.035 "is_configured": true, 00:14:05.035 "data_offset": 2048, 00:14:05.035 "data_size": 63488 00:14:05.035 }, 00:14:05.035 { 00:14:05.035 "name": "BaseBdev3", 00:14:05.035 "uuid": "18519ccf-0ba1-5443-8837-c562d087513f", 00:14:05.035 "is_configured": true, 00:14:05.035 "data_offset": 2048, 00:14:05.035 "data_size": 63488 00:14:05.035 } 00:14:05.035 ] 00:14:05.035 }' 00:14:05.035 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.035 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:05.035 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.035 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.035 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:05.295 [2024-11-26 12:57:22.863208] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:05.295 [2024-11-26 12:57:22.863315] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:05.295 [2024-11-26 12:57:22.863454] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.865 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:05.865 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.865 
12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.865 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.865 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.865 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.865 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.865 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.865 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.865 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.865 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.865 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.865 "name": "raid_bdev1", 00:14:05.865 "uuid": "163c86f6-ab78-46ce-ad53-881a69582158", 00:14:05.865 "strip_size_kb": 64, 00:14:05.865 "state": "online", 00:14:05.865 "raid_level": "raid5f", 00:14:05.865 "superblock": true, 00:14:05.865 "num_base_bdevs": 3, 00:14:05.865 "num_base_bdevs_discovered": 3, 00:14:05.865 "num_base_bdevs_operational": 3, 00:14:05.865 "base_bdevs_list": [ 00:14:05.865 { 00:14:05.865 "name": "spare", 00:14:05.865 "uuid": "381d9d17-6a08-5862-9584-4b64149aa62e", 00:14:05.865 "is_configured": true, 00:14:05.865 "data_offset": 2048, 00:14:05.865 "data_size": 63488 00:14:05.865 }, 00:14:05.865 { 00:14:05.865 "name": "BaseBdev2", 00:14:05.865 "uuid": "4eb398ef-3593-5f34-8ed6-cf7dd38fce2f", 00:14:05.865 "is_configured": true, 00:14:05.865 "data_offset": 2048, 00:14:05.865 "data_size": 63488 00:14:05.865 }, 00:14:05.865 { 00:14:05.865 "name": "BaseBdev3", 00:14:05.865 
"uuid": "18519ccf-0ba1-5443-8837-c562d087513f", 00:14:05.865 "is_configured": true, 00:14:05.865 "data_offset": 2048, 00:14:05.865 "data_size": 63488 00:14:05.865 } 00:14:05.865 ] 00:14:05.865 }' 00:14:05.865 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.865 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:05.865 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.125 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:06.125 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:06.125 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:06.125 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.125 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:06.125 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:06.125 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.125 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.125 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.125 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.125 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.125 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.125 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.125 "name": 
"raid_bdev1", 00:14:06.125 "uuid": "163c86f6-ab78-46ce-ad53-881a69582158", 00:14:06.125 "strip_size_kb": 64, 00:14:06.125 "state": "online", 00:14:06.125 "raid_level": "raid5f", 00:14:06.125 "superblock": true, 00:14:06.125 "num_base_bdevs": 3, 00:14:06.125 "num_base_bdevs_discovered": 3, 00:14:06.125 "num_base_bdevs_operational": 3, 00:14:06.125 "base_bdevs_list": [ 00:14:06.125 { 00:14:06.125 "name": "spare", 00:14:06.125 "uuid": "381d9d17-6a08-5862-9584-4b64149aa62e", 00:14:06.125 "is_configured": true, 00:14:06.125 "data_offset": 2048, 00:14:06.125 "data_size": 63488 00:14:06.125 }, 00:14:06.125 { 00:14:06.125 "name": "BaseBdev2", 00:14:06.125 "uuid": "4eb398ef-3593-5f34-8ed6-cf7dd38fce2f", 00:14:06.125 "is_configured": true, 00:14:06.125 "data_offset": 2048, 00:14:06.125 "data_size": 63488 00:14:06.125 }, 00:14:06.125 { 00:14:06.125 "name": "BaseBdev3", 00:14:06.125 "uuid": "18519ccf-0ba1-5443-8837-c562d087513f", 00:14:06.125 "is_configured": true, 00:14:06.125 "data_offset": 2048, 00:14:06.125 "data_size": 63488 00:14:06.125 } 00:14:06.125 ] 00:14:06.125 }' 00:14:06.126 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.126 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:06.126 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.126 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:06.126 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:06.126 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.126 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.126 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid5f 00:14:06.126 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.126 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.126 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.126 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.126 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.126 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.126 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.126 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.126 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.126 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.126 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.126 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.126 "name": "raid_bdev1", 00:14:06.126 "uuid": "163c86f6-ab78-46ce-ad53-881a69582158", 00:14:06.126 "strip_size_kb": 64, 00:14:06.126 "state": "online", 00:14:06.126 "raid_level": "raid5f", 00:14:06.126 "superblock": true, 00:14:06.126 "num_base_bdevs": 3, 00:14:06.126 "num_base_bdevs_discovered": 3, 00:14:06.126 "num_base_bdevs_operational": 3, 00:14:06.126 "base_bdevs_list": [ 00:14:06.126 { 00:14:06.126 "name": "spare", 00:14:06.126 "uuid": "381d9d17-6a08-5862-9584-4b64149aa62e", 00:14:06.126 "is_configured": true, 00:14:06.126 "data_offset": 2048, 00:14:06.126 "data_size": 63488 00:14:06.126 }, 00:14:06.126 { 00:14:06.126 "name": "BaseBdev2", 
00:14:06.126 "uuid": "4eb398ef-3593-5f34-8ed6-cf7dd38fce2f", 00:14:06.126 "is_configured": true, 00:14:06.126 "data_offset": 2048, 00:14:06.126 "data_size": 63488 00:14:06.126 }, 00:14:06.126 { 00:14:06.126 "name": "BaseBdev3", 00:14:06.126 "uuid": "18519ccf-0ba1-5443-8837-c562d087513f", 00:14:06.126 "is_configured": true, 00:14:06.126 "data_offset": 2048, 00:14:06.126 "data_size": 63488 00:14:06.126 } 00:14:06.126 ] 00:14:06.126 }' 00:14:06.126 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.126 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.696 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:06.696 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.696 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.696 [2024-11-26 12:57:24.138241] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:06.696 [2024-11-26 12:57:24.138271] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:06.696 [2024-11-26 12:57:24.138343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.696 [2024-11-26 12:57:24.138413] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:06.696 [2024-11-26 12:57:24.138425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:06.696 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.696 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.696 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.696 12:57:24 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.696 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:06.696 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.696 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:06.696 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:06.696 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:06.696 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:06.696 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:06.696 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:06.696 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:06.696 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:06.696 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:06.696 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:06.696 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:06.696 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:06.696 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:06.955 /dev/nbd0 00:14:06.955 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:06.955 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:14:06.955 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:06.955 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:06.955 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:06.955 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:06.955 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:06.955 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:06.955 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:06.955 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:06.955 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.955 1+0 records in 00:14:06.955 1+0 records out 00:14:06.955 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301787 s, 13.6 MB/s 00:14:06.955 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.955 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:06.955 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.955 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:06.955 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:06.955 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:06.955 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i 
< 2 )) 00:14:06.955 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:07.214 /dev/nbd1 00:14:07.214 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:07.214 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:07.214 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:07.214 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:07.214 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:07.214 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:07.214 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:07.214 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:07.214 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:07.214 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:07.214 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:07.214 1+0 records in 00:14:07.214 1+0 records out 00:14:07.214 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420691 s, 9.7 MB/s 00:14:07.214 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.214 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:07.214 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:14:07.214 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:07.214 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:07.214 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.214 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:07.214 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:07.214 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:07.214 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.214 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:07.214 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:07.214 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:07.214 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.214 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:07.474 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:07.474 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:07.474 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:07.474 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.474 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.474 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:14:07.474 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:07.475 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:07.475 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.475 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:07.475 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:07.475 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:07.475 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:07.475 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.475 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.475 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:07.475 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:07.475 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:07.475 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:07.475 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:07.475 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.475 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.735 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.735 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:07.735 12:57:25 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.735 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.735 [2024-11-26 12:57:25.166430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:07.735 [2024-11-26 12:57:25.166535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.735 [2024-11-26 12:57:25.166573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:07.735 [2024-11-26 12:57:25.166602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.735 [2024-11-26 12:57:25.168788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.735 [2024-11-26 12:57:25.168874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:07.735 [2024-11-26 12:57:25.168968] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:07.735 [2024-11-26 12:57:25.169038] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:07.735 [2024-11-26 12:57:25.169160] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:07.735 [2024-11-26 12:57:25.169276] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:07.735 spare 00:14:07.735 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.735 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:07.735 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.735 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.735 [2024-11-26 12:57:25.269164] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 
00:14:07.735 [2024-11-26 12:57:25.269242] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:07.735 [2024-11-26 12:57:25.269533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047560 00:14:07.735 [2024-11-26 12:57:25.269973] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:14:07.735 [2024-11-26 12:57:25.270027] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:14:07.735 [2024-11-26 12:57:25.270186] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.735 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.735 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:07.735 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.735 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.735 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:07.735 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.735 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.735 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.735 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.735 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.735 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.735 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:07.735 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.735 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.735 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.735 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.735 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.735 "name": "raid_bdev1", 00:14:07.735 "uuid": "163c86f6-ab78-46ce-ad53-881a69582158", 00:14:07.735 "strip_size_kb": 64, 00:14:07.735 "state": "online", 00:14:07.735 "raid_level": "raid5f", 00:14:07.735 "superblock": true, 00:14:07.735 "num_base_bdevs": 3, 00:14:07.735 "num_base_bdevs_discovered": 3, 00:14:07.735 "num_base_bdevs_operational": 3, 00:14:07.736 "base_bdevs_list": [ 00:14:07.736 { 00:14:07.736 "name": "spare", 00:14:07.736 "uuid": "381d9d17-6a08-5862-9584-4b64149aa62e", 00:14:07.736 "is_configured": true, 00:14:07.736 "data_offset": 2048, 00:14:07.736 "data_size": 63488 00:14:07.736 }, 00:14:07.736 { 00:14:07.736 "name": "BaseBdev2", 00:14:07.736 "uuid": "4eb398ef-3593-5f34-8ed6-cf7dd38fce2f", 00:14:07.736 "is_configured": true, 00:14:07.736 "data_offset": 2048, 00:14:07.736 "data_size": 63488 00:14:07.736 }, 00:14:07.736 { 00:14:07.736 "name": "BaseBdev3", 00:14:07.736 "uuid": "18519ccf-0ba1-5443-8837-c562d087513f", 00:14:07.736 "is_configured": true, 00:14:07.736 "data_offset": 2048, 00:14:07.736 "data_size": 63488 00:14:07.736 } 00:14:07.736 ] 00:14:07.736 }' 00:14:07.736 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.736 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:08.306 12:57:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.306 "name": "raid_bdev1", 00:14:08.306 "uuid": "163c86f6-ab78-46ce-ad53-881a69582158", 00:14:08.306 "strip_size_kb": 64, 00:14:08.306 "state": "online", 00:14:08.306 "raid_level": "raid5f", 00:14:08.306 "superblock": true, 00:14:08.306 "num_base_bdevs": 3, 00:14:08.306 "num_base_bdevs_discovered": 3, 00:14:08.306 "num_base_bdevs_operational": 3, 00:14:08.306 "base_bdevs_list": [ 00:14:08.306 { 00:14:08.306 "name": "spare", 00:14:08.306 "uuid": "381d9d17-6a08-5862-9584-4b64149aa62e", 00:14:08.306 "is_configured": true, 00:14:08.306 "data_offset": 2048, 00:14:08.306 "data_size": 63488 00:14:08.306 }, 00:14:08.306 { 00:14:08.306 "name": "BaseBdev2", 00:14:08.306 "uuid": "4eb398ef-3593-5f34-8ed6-cf7dd38fce2f", 00:14:08.306 "is_configured": true, 00:14:08.306 "data_offset": 2048, 00:14:08.306 "data_size": 63488 00:14:08.306 }, 00:14:08.306 { 00:14:08.306 "name": "BaseBdev3", 00:14:08.306 "uuid": 
"18519ccf-0ba1-5443-8837-c562d087513f", 00:14:08.306 "is_configured": true, 00:14:08.306 "data_offset": 2048, 00:14:08.306 "data_size": 63488 00:14:08.306 } 00:14:08.306 ] 00:14:08.306 }' 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.306 [2024-11-26 12:57:25.909171] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:08.306 
12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.306 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.306 "name": "raid_bdev1", 00:14:08.306 "uuid": "163c86f6-ab78-46ce-ad53-881a69582158", 00:14:08.306 "strip_size_kb": 64, 00:14:08.306 "state": "online", 00:14:08.306 "raid_level": "raid5f", 00:14:08.306 "superblock": true, 00:14:08.306 "num_base_bdevs": 3, 00:14:08.306 "num_base_bdevs_discovered": 2, 00:14:08.306 "num_base_bdevs_operational": 2, 
00:14:08.307 "base_bdevs_list": [ 00:14:08.307 { 00:14:08.307 "name": null, 00:14:08.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.307 "is_configured": false, 00:14:08.307 "data_offset": 0, 00:14:08.307 "data_size": 63488 00:14:08.307 }, 00:14:08.307 { 00:14:08.307 "name": "BaseBdev2", 00:14:08.307 "uuid": "4eb398ef-3593-5f34-8ed6-cf7dd38fce2f", 00:14:08.307 "is_configured": true, 00:14:08.307 "data_offset": 2048, 00:14:08.307 "data_size": 63488 00:14:08.307 }, 00:14:08.307 { 00:14:08.307 "name": "BaseBdev3", 00:14:08.307 "uuid": "18519ccf-0ba1-5443-8837-c562d087513f", 00:14:08.307 "is_configured": true, 00:14:08.307 "data_offset": 2048, 00:14:08.307 "data_size": 63488 00:14:08.307 } 00:14:08.307 ] 00:14:08.307 }' 00:14:08.307 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.307 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.877 12:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:08.877 12:57:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.877 12:57:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.877 [2024-11-26 12:57:26.320479] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:08.877 [2024-11-26 12:57:26.320599] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:08.877 [2024-11-26 12:57:26.320612] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:08.877 [2024-11-26 12:57:26.320657] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:08.877 [2024-11-26 12:57:26.324279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047630 00:14:08.877 [2024-11-26 12:57:26.326218] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:08.877 12:57:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.877 12:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:09.866 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:09.866 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.866 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:09.866 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:09.866 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.866 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.866 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.866 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.866 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.866 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.866 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.866 "name": "raid_bdev1", 00:14:09.866 "uuid": "163c86f6-ab78-46ce-ad53-881a69582158", 00:14:09.866 "strip_size_kb": 64, 00:14:09.866 "state": "online", 00:14:09.866 
"raid_level": "raid5f", 00:14:09.866 "superblock": true, 00:14:09.866 "num_base_bdevs": 3, 00:14:09.866 "num_base_bdevs_discovered": 3, 00:14:09.866 "num_base_bdevs_operational": 3, 00:14:09.866 "process": { 00:14:09.866 "type": "rebuild", 00:14:09.866 "target": "spare", 00:14:09.866 "progress": { 00:14:09.866 "blocks": 20480, 00:14:09.866 "percent": 16 00:14:09.866 } 00:14:09.866 }, 00:14:09.866 "base_bdevs_list": [ 00:14:09.866 { 00:14:09.866 "name": "spare", 00:14:09.866 "uuid": "381d9d17-6a08-5862-9584-4b64149aa62e", 00:14:09.866 "is_configured": true, 00:14:09.866 "data_offset": 2048, 00:14:09.866 "data_size": 63488 00:14:09.866 }, 00:14:09.866 { 00:14:09.866 "name": "BaseBdev2", 00:14:09.866 "uuid": "4eb398ef-3593-5f34-8ed6-cf7dd38fce2f", 00:14:09.866 "is_configured": true, 00:14:09.866 "data_offset": 2048, 00:14:09.866 "data_size": 63488 00:14:09.866 }, 00:14:09.866 { 00:14:09.866 "name": "BaseBdev3", 00:14:09.866 "uuid": "18519ccf-0ba1-5443-8837-c562d087513f", 00:14:09.866 "is_configured": true, 00:14:09.866 "data_offset": 2048, 00:14:09.866 "data_size": 63488 00:14:09.866 } 00:14:09.866 ] 00:14:09.866 }' 00:14:09.866 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.866 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:09.866 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.866 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.866 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:09.866 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.866 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.866 [2024-11-26 12:57:27.472894] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:09.866 [2024-11-26 12:57:27.532748] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:09.866 [2024-11-26 12:57:27.532848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.866 [2024-11-26 12:57:27.532886] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:09.866 [2024-11-26 12:57:27.532905] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:09.866 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.866 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:10.127 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.127 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.127 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:10.127 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.127 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:10.127 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.127 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.127 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.127 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.127 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.127 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.127 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.127 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.127 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.127 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.127 "name": "raid_bdev1", 00:14:10.127 "uuid": "163c86f6-ab78-46ce-ad53-881a69582158", 00:14:10.127 "strip_size_kb": 64, 00:14:10.127 "state": "online", 00:14:10.127 "raid_level": "raid5f", 00:14:10.127 "superblock": true, 00:14:10.127 "num_base_bdevs": 3, 00:14:10.127 "num_base_bdevs_discovered": 2, 00:14:10.127 "num_base_bdevs_operational": 2, 00:14:10.127 "base_bdevs_list": [ 00:14:10.127 { 00:14:10.127 "name": null, 00:14:10.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.127 "is_configured": false, 00:14:10.127 "data_offset": 0, 00:14:10.127 "data_size": 63488 00:14:10.127 }, 00:14:10.127 { 00:14:10.127 "name": "BaseBdev2", 00:14:10.127 "uuid": "4eb398ef-3593-5f34-8ed6-cf7dd38fce2f", 00:14:10.127 "is_configured": true, 00:14:10.127 "data_offset": 2048, 00:14:10.127 "data_size": 63488 00:14:10.127 }, 00:14:10.127 { 00:14:10.127 "name": "BaseBdev3", 00:14:10.127 "uuid": "18519ccf-0ba1-5443-8837-c562d087513f", 00:14:10.127 "is_configured": true, 00:14:10.127 "data_offset": 2048, 00:14:10.127 "data_size": 63488 00:14:10.127 } 00:14:10.127 ] 00:14:10.127 }' 00:14:10.127 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.127 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.387 12:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:10.387 12:57:28 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.387 12:57:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.387 [2024-11-26 12:57:28.040927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:10.387 [2024-11-26 12:57:28.041060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.387 [2024-11-26 12:57:28.041097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:14:10.387 [2024-11-26 12:57:28.041124] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.387 [2024-11-26 12:57:28.041595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.387 [2024-11-26 12:57:28.041663] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:10.387 [2024-11-26 12:57:28.041771] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:10.387 [2024-11-26 12:57:28.041808] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:10.387 [2024-11-26 12:57:28.041847] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:10.387 [2024-11-26 12:57:28.041888] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:10.387 [2024-11-26 12:57:28.044617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:14:10.387 [2024-11-26 12:57:28.046703] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:10.387 spare 00:14:10.387 12:57:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.387 12:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.768 "name": "raid_bdev1", 00:14:11.768 "uuid": "163c86f6-ab78-46ce-ad53-881a69582158", 00:14:11.768 "strip_size_kb": 64, 00:14:11.768 "state": 
"online", 00:14:11.768 "raid_level": "raid5f", 00:14:11.768 "superblock": true, 00:14:11.768 "num_base_bdevs": 3, 00:14:11.768 "num_base_bdevs_discovered": 3, 00:14:11.768 "num_base_bdevs_operational": 3, 00:14:11.768 "process": { 00:14:11.768 "type": "rebuild", 00:14:11.768 "target": "spare", 00:14:11.768 "progress": { 00:14:11.768 "blocks": 20480, 00:14:11.768 "percent": 16 00:14:11.768 } 00:14:11.768 }, 00:14:11.768 "base_bdevs_list": [ 00:14:11.768 { 00:14:11.768 "name": "spare", 00:14:11.768 "uuid": "381d9d17-6a08-5862-9584-4b64149aa62e", 00:14:11.768 "is_configured": true, 00:14:11.768 "data_offset": 2048, 00:14:11.768 "data_size": 63488 00:14:11.768 }, 00:14:11.768 { 00:14:11.768 "name": "BaseBdev2", 00:14:11.768 "uuid": "4eb398ef-3593-5f34-8ed6-cf7dd38fce2f", 00:14:11.768 "is_configured": true, 00:14:11.768 "data_offset": 2048, 00:14:11.768 "data_size": 63488 00:14:11.768 }, 00:14:11.768 { 00:14:11.768 "name": "BaseBdev3", 00:14:11.768 "uuid": "18519ccf-0ba1-5443-8837-c562d087513f", 00:14:11.768 "is_configured": true, 00:14:11.768 "data_offset": 2048, 00:14:11.768 "data_size": 63488 00:14:11.768 } 00:14:11.768 ] 00:14:11.768 }' 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.768 [2024-11-26 12:57:29.209317] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:11.768 [2024-11-26 12:57:29.253151] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:11.768 [2024-11-26 12:57:29.253277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.768 [2024-11-26 12:57:29.253313] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:11.768 [2024-11-26 12:57:29.253345] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.768 "name": "raid_bdev1", 00:14:11.768 "uuid": "163c86f6-ab78-46ce-ad53-881a69582158", 00:14:11.768 "strip_size_kb": 64, 00:14:11.768 "state": "online", 00:14:11.768 "raid_level": "raid5f", 00:14:11.768 "superblock": true, 00:14:11.768 "num_base_bdevs": 3, 00:14:11.768 "num_base_bdevs_discovered": 2, 00:14:11.768 "num_base_bdevs_operational": 2, 00:14:11.768 "base_bdevs_list": [ 00:14:11.768 { 00:14:11.768 "name": null, 00:14:11.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.768 "is_configured": false, 00:14:11.768 "data_offset": 0, 00:14:11.768 "data_size": 63488 00:14:11.768 }, 00:14:11.768 { 00:14:11.768 "name": "BaseBdev2", 00:14:11.768 "uuid": "4eb398ef-3593-5f34-8ed6-cf7dd38fce2f", 00:14:11.768 "is_configured": true, 00:14:11.768 "data_offset": 2048, 00:14:11.768 "data_size": 63488 00:14:11.768 }, 00:14:11.768 { 00:14:11.768 "name": "BaseBdev3", 00:14:11.768 "uuid": "18519ccf-0ba1-5443-8837-c562d087513f", 00:14:11.768 "is_configured": true, 00:14:11.768 "data_offset": 2048, 00:14:11.768 "data_size": 63488 00:14:11.768 } 00:14:11.768 ] 00:14:11.768 }' 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.768 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.337 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:12.337 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:14:12.337 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:12.337 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:12.337 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.337 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.337 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.337 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.337 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.337 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.337 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.337 "name": "raid_bdev1", 00:14:12.337 "uuid": "163c86f6-ab78-46ce-ad53-881a69582158", 00:14:12.337 "strip_size_kb": 64, 00:14:12.337 "state": "online", 00:14:12.337 "raid_level": "raid5f", 00:14:12.337 "superblock": true, 00:14:12.337 "num_base_bdevs": 3, 00:14:12.337 "num_base_bdevs_discovered": 2, 00:14:12.337 "num_base_bdevs_operational": 2, 00:14:12.337 "base_bdevs_list": [ 00:14:12.337 { 00:14:12.337 "name": null, 00:14:12.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.337 "is_configured": false, 00:14:12.337 "data_offset": 0, 00:14:12.337 "data_size": 63488 00:14:12.337 }, 00:14:12.337 { 00:14:12.337 "name": "BaseBdev2", 00:14:12.337 "uuid": "4eb398ef-3593-5f34-8ed6-cf7dd38fce2f", 00:14:12.337 "is_configured": true, 00:14:12.337 "data_offset": 2048, 00:14:12.337 "data_size": 63488 00:14:12.337 }, 00:14:12.337 { 00:14:12.337 "name": "BaseBdev3", 00:14:12.337 "uuid": "18519ccf-0ba1-5443-8837-c562d087513f", 00:14:12.337 "is_configured": true, 
00:14:12.337 "data_offset": 2048, 00:14:12.337 "data_size": 63488 00:14:12.337 } 00:14:12.337 ] 00:14:12.337 }' 00:14:12.337 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.337 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:12.337 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.337 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:12.337 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:12.337 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.337 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.337 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.337 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:12.337 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.337 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.337 [2024-11-26 12:57:29.881079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:12.337 [2024-11-26 12:57:29.881217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.337 [2024-11-26 12:57:29.881243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:12.337 [2024-11-26 12:57:29.881253] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.337 [2024-11-26 12:57:29.881636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.337 [2024-11-26 
12:57:29.881656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:12.337 [2024-11-26 12:57:29.881717] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:12.337 [2024-11-26 12:57:29.881732] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:12.337 [2024-11-26 12:57:29.881740] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:12.337 [2024-11-26 12:57:29.881760] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:12.337 BaseBdev1 00:14:12.337 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.337 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:13.277 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:13.277 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.277 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.277 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:13.277 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.277 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:13.277 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.277 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.277 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.277 12:57:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.277 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.277 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.277 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.277 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.277 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.277 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.277 "name": "raid_bdev1", 00:14:13.277 "uuid": "163c86f6-ab78-46ce-ad53-881a69582158", 00:14:13.277 "strip_size_kb": 64, 00:14:13.277 "state": "online", 00:14:13.277 "raid_level": "raid5f", 00:14:13.277 "superblock": true, 00:14:13.277 "num_base_bdevs": 3, 00:14:13.277 "num_base_bdevs_discovered": 2, 00:14:13.277 "num_base_bdevs_operational": 2, 00:14:13.277 "base_bdevs_list": [ 00:14:13.277 { 00:14:13.277 "name": null, 00:14:13.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.277 "is_configured": false, 00:14:13.277 "data_offset": 0, 00:14:13.277 "data_size": 63488 00:14:13.277 }, 00:14:13.277 { 00:14:13.277 "name": "BaseBdev2", 00:14:13.277 "uuid": "4eb398ef-3593-5f34-8ed6-cf7dd38fce2f", 00:14:13.277 "is_configured": true, 00:14:13.277 "data_offset": 2048, 00:14:13.277 "data_size": 63488 00:14:13.277 }, 00:14:13.277 { 00:14:13.277 "name": "BaseBdev3", 00:14:13.277 "uuid": "18519ccf-0ba1-5443-8837-c562d087513f", 00:14:13.277 "is_configured": true, 00:14:13.277 "data_offset": 2048, 00:14:13.277 "data_size": 63488 00:14:13.277 } 00:14:13.277 ] 00:14:13.277 }' 00:14:13.277 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.277 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:13.846 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:13.846 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.847 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:13.847 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:13.847 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.847 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.847 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.847 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.847 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.847 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.847 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.847 "name": "raid_bdev1", 00:14:13.847 "uuid": "163c86f6-ab78-46ce-ad53-881a69582158", 00:14:13.847 "strip_size_kb": 64, 00:14:13.847 "state": "online", 00:14:13.847 "raid_level": "raid5f", 00:14:13.847 "superblock": true, 00:14:13.847 "num_base_bdevs": 3, 00:14:13.847 "num_base_bdevs_discovered": 2, 00:14:13.847 "num_base_bdevs_operational": 2, 00:14:13.847 "base_bdevs_list": [ 00:14:13.847 { 00:14:13.847 "name": null, 00:14:13.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.847 "is_configured": false, 00:14:13.847 "data_offset": 0, 00:14:13.847 "data_size": 63488 00:14:13.847 }, 00:14:13.847 { 00:14:13.847 "name": "BaseBdev2", 00:14:13.847 "uuid": "4eb398ef-3593-5f34-8ed6-cf7dd38fce2f", 
00:14:13.847 "is_configured": true, 00:14:13.847 "data_offset": 2048, 00:14:13.847 "data_size": 63488 00:14:13.847 }, 00:14:13.847 { 00:14:13.847 "name": "BaseBdev3", 00:14:13.847 "uuid": "18519ccf-0ba1-5443-8837-c562d087513f", 00:14:13.847 "is_configured": true, 00:14:13.847 "data_offset": 2048, 00:14:13.847 "data_size": 63488 00:14:13.847 } 00:14:13.847 ] 00:14:13.847 }' 00:14:13.847 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.847 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:13.847 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.847 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:13.847 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:13.847 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:14:13.847 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:13.847 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:13.847 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:13.847 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:13.847 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:13.847 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:13.847 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.847 12:57:31 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.847 [2024-11-26 12:57:31.494594] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:13.847 [2024-11-26 12:57:31.494702] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:13.847 [2024-11-26 12:57:31.494720] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:13.847 request: 00:14:13.847 { 00:14:13.847 "base_bdev": "BaseBdev1", 00:14:13.847 "raid_bdev": "raid_bdev1", 00:14:13.847 "method": "bdev_raid_add_base_bdev", 00:14:13.847 "req_id": 1 00:14:13.847 } 00:14:13.847 Got JSON-RPC error response 00:14:13.847 response: 00:14:13.847 { 00:14:13.847 "code": -22, 00:14:13.847 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:13.847 } 00:14:13.847 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:13.847 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:14:13.847 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:13.847 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:13.847 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:13.847 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:15.227 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:15.227 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.227 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.227 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:15.227 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.227 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:15.227 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.227 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.227 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.227 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.227 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.227 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.227 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.227 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.227 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.227 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.227 "name": "raid_bdev1", 00:14:15.227 "uuid": "163c86f6-ab78-46ce-ad53-881a69582158", 00:14:15.227 "strip_size_kb": 64, 00:14:15.227 "state": "online", 00:14:15.227 "raid_level": "raid5f", 00:14:15.227 "superblock": true, 00:14:15.227 "num_base_bdevs": 3, 00:14:15.227 "num_base_bdevs_discovered": 2, 00:14:15.227 "num_base_bdevs_operational": 2, 00:14:15.227 "base_bdevs_list": [ 00:14:15.227 { 00:14:15.227 "name": null, 00:14:15.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.227 "is_configured": false, 00:14:15.227 "data_offset": 0, 00:14:15.227 "data_size": 63488 00:14:15.227 }, 00:14:15.227 { 00:14:15.227 
"name": "BaseBdev2", 00:14:15.227 "uuid": "4eb398ef-3593-5f34-8ed6-cf7dd38fce2f", 00:14:15.227 "is_configured": true, 00:14:15.227 "data_offset": 2048, 00:14:15.227 "data_size": 63488 00:14:15.227 }, 00:14:15.227 { 00:14:15.227 "name": "BaseBdev3", 00:14:15.227 "uuid": "18519ccf-0ba1-5443-8837-c562d087513f", 00:14:15.227 "is_configured": true, 00:14:15.227 "data_offset": 2048, 00:14:15.227 "data_size": 63488 00:14:15.227 } 00:14:15.227 ] 00:14:15.227 }' 00:14:15.227 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.227 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.488 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:15.488 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.488 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:15.488 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:15.488 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.488 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.488 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.488 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.488 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.488 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.488 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.488 "name": "raid_bdev1", 00:14:15.488 "uuid": "163c86f6-ab78-46ce-ad53-881a69582158", 00:14:15.488 
"strip_size_kb": 64, 00:14:15.488 "state": "online", 00:14:15.488 "raid_level": "raid5f", 00:14:15.488 "superblock": true, 00:14:15.488 "num_base_bdevs": 3, 00:14:15.488 "num_base_bdevs_discovered": 2, 00:14:15.488 "num_base_bdevs_operational": 2, 00:14:15.488 "base_bdevs_list": [ 00:14:15.488 { 00:14:15.488 "name": null, 00:14:15.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.488 "is_configured": false, 00:14:15.488 "data_offset": 0, 00:14:15.488 "data_size": 63488 00:14:15.488 }, 00:14:15.488 { 00:14:15.488 "name": "BaseBdev2", 00:14:15.488 "uuid": "4eb398ef-3593-5f34-8ed6-cf7dd38fce2f", 00:14:15.488 "is_configured": true, 00:14:15.488 "data_offset": 2048, 00:14:15.488 "data_size": 63488 00:14:15.488 }, 00:14:15.488 { 00:14:15.488 "name": "BaseBdev3", 00:14:15.488 "uuid": "18519ccf-0ba1-5443-8837-c562d087513f", 00:14:15.488 "is_configured": true, 00:14:15.488 "data_offset": 2048, 00:14:15.488 "data_size": 63488 00:14:15.488 } 00:14:15.488 ] 00:14:15.488 }' 00:14:15.488 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.488 12:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:15.488 12:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.488 12:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:15.488 12:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 92725 00:14:15.488 12:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 92725 ']' 00:14:15.489 12:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 92725 00:14:15.489 12:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:15.489 12:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:15.489 12:57:33 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92725 00:14:15.489 12:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:15.489 12:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:15.489 12:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92725' 00:14:15.489 killing process with pid 92725 00:14:15.489 Received shutdown signal, test time was about 60.000000 seconds 00:14:15.489 00:14:15.489 Latency(us) 00:14:15.489 [2024-11-26T12:57:33.173Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.489 [2024-11-26T12:57:33.173Z] =================================================================================================================== 00:14:15.489 [2024-11-26T12:57:33.173Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:15.489 12:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 92725 00:14:15.489 [2024-11-26 12:57:33.129881] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:15.489 [2024-11-26 12:57:33.129977] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:15.489 [2024-11-26 12:57:33.130032] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:15.489 [2024-11-26 12:57:33.130042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:14:15.489 12:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 92725 00:14:15.749 [2024-11-26 12:57:33.171511] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:15.749 12:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:15.749 ************************************ 00:14:15.749 END TEST 
raid5f_rebuild_test_sb 00:14:15.749 ************************************ 00:14:15.749 00:14:15.749 real 0m21.387s 00:14:15.749 user 0m27.668s 00:14:15.749 sys 0m2.751s 00:14:15.749 12:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:15.750 12:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.011 12:57:33 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:16.011 12:57:33 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:14:16.011 12:57:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:16.011 12:57:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:16.011 12:57:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:16.011 ************************************ 00:14:16.011 START TEST raid5f_state_function_test 00:14:16.011 ************************************ 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=93478 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93478' 00:14:16.011 Process raid pid: 93478 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 93478 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 93478 ']' 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:16.011 12:57:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.011 [2024-11-26 12:57:33.580602] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:16.011 [2024-11-26 12:57:33.580774] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.271 [2024-11-26 12:57:33.739154] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.271 [2024-11-26 12:57:33.785672] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.271 [2024-11-26 12:57:33.828279] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:16.271 [2024-11-26 12:57:33.828396] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:16.840 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:16.840 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:14:16.840 12:57:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:16.840 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.840 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.840 [2024-11-26 12:57:34.413802] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:16.840 [2024-11-26 12:57:34.413848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:16.840 [2024-11-26 12:57:34.413859] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:16.840 [2024-11-26 12:57:34.413868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:16.840 [2024-11-26 12:57:34.413873] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:14:16.840 [2024-11-26 12:57:34.413885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:16.840 [2024-11-26 12:57:34.413891] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:16.840 [2024-11-26 12:57:34.413899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:16.840 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.840 12:57:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:16.840 12:57:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.840 12:57:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:16.840 12:57:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:16.840 12:57:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.840 12:57:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:16.840 12:57:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.840 12:57:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.840 12:57:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.840 12:57:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.840 12:57:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.840 12:57:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.840 12:57:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.840 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.840 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.840 12:57:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.840 "name": "Existed_Raid", 00:14:16.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.840 "strip_size_kb": 64, 00:14:16.840 "state": "configuring", 00:14:16.840 "raid_level": "raid5f", 00:14:16.840 "superblock": false, 00:14:16.840 "num_base_bdevs": 4, 00:14:16.840 "num_base_bdevs_discovered": 0, 00:14:16.840 "num_base_bdevs_operational": 4, 00:14:16.840 "base_bdevs_list": [ 00:14:16.840 { 00:14:16.840 "name": "BaseBdev1", 00:14:16.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.840 "is_configured": false, 00:14:16.840 "data_offset": 0, 00:14:16.840 "data_size": 0 00:14:16.840 }, 00:14:16.840 { 00:14:16.840 "name": "BaseBdev2", 00:14:16.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.840 "is_configured": false, 00:14:16.840 "data_offset": 0, 00:14:16.840 "data_size": 0 00:14:16.840 }, 00:14:16.840 { 00:14:16.840 "name": "BaseBdev3", 00:14:16.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.840 "is_configured": false, 00:14:16.840 "data_offset": 0, 00:14:16.840 "data_size": 0 00:14:16.840 }, 00:14:16.840 { 00:14:16.840 "name": "BaseBdev4", 00:14:16.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.840 "is_configured": false, 00:14:16.840 "data_offset": 0, 00:14:16.840 "data_size": 0 00:14:16.840 } 00:14:16.840 ] 00:14:16.840 }' 00:14:16.840 12:57:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.840 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.410 12:57:34 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:17.410 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.410 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.410 [2024-11-26 12:57:34.916837] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:17.410 [2024-11-26 12:57:34.916933] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:14:17.410 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.410 12:57:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:17.410 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.410 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.410 [2024-11-26 12:57:34.928869] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:17.410 [2024-11-26 12:57:34.928968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:17.410 [2024-11-26 12:57:34.929004] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:17.410 [2024-11-26 12:57:34.929024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:17.410 [2024-11-26 12:57:34.929040] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:17.410 [2024-11-26 12:57:34.929059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:17.411 [2024-11-26 12:57:34.929090] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:14:17.411 [2024-11-26 12:57:34.929110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.411 [2024-11-26 12:57:34.949761] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:17.411 BaseBdev1 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.411 
12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.411 [ 00:14:17.411 { 00:14:17.411 "name": "BaseBdev1", 00:14:17.411 "aliases": [ 00:14:17.411 "2c947c5a-396d-479a-afe2-b112bb20c457" 00:14:17.411 ], 00:14:17.411 "product_name": "Malloc disk", 00:14:17.411 "block_size": 512, 00:14:17.411 "num_blocks": 65536, 00:14:17.411 "uuid": "2c947c5a-396d-479a-afe2-b112bb20c457", 00:14:17.411 "assigned_rate_limits": { 00:14:17.411 "rw_ios_per_sec": 0, 00:14:17.411 "rw_mbytes_per_sec": 0, 00:14:17.411 "r_mbytes_per_sec": 0, 00:14:17.411 "w_mbytes_per_sec": 0 00:14:17.411 }, 00:14:17.411 "claimed": true, 00:14:17.411 "claim_type": "exclusive_write", 00:14:17.411 "zoned": false, 00:14:17.411 "supported_io_types": { 00:14:17.411 "read": true, 00:14:17.411 "write": true, 00:14:17.411 "unmap": true, 00:14:17.411 "flush": true, 00:14:17.411 "reset": true, 00:14:17.411 "nvme_admin": false, 00:14:17.411 "nvme_io": false, 00:14:17.411 "nvme_io_md": false, 00:14:17.411 "write_zeroes": true, 00:14:17.411 "zcopy": true, 00:14:17.411 "get_zone_info": false, 00:14:17.411 "zone_management": false, 00:14:17.411 "zone_append": false, 00:14:17.411 "compare": false, 00:14:17.411 "compare_and_write": false, 00:14:17.411 "abort": true, 00:14:17.411 "seek_hole": false, 00:14:17.411 "seek_data": false, 00:14:17.411 "copy": true, 00:14:17.411 "nvme_iov_md": false 00:14:17.411 }, 00:14:17.411 "memory_domains": [ 00:14:17.411 { 00:14:17.411 "dma_device_id": "system", 00:14:17.411 "dma_device_type": 1 00:14:17.411 }, 00:14:17.411 { 00:14:17.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.411 "dma_device_type": 2 00:14:17.411 } 00:14:17.411 ], 00:14:17.411 "driver_specific": {} 00:14:17.411 } 
00:14:17.411 ] 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.411 12:57:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.411 12:57:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:17.411 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.411 "name": "Existed_Raid", 00:14:17.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.411 "strip_size_kb": 64, 00:14:17.411 "state": "configuring", 00:14:17.411 "raid_level": "raid5f", 00:14:17.411 "superblock": false, 00:14:17.411 "num_base_bdevs": 4, 00:14:17.411 "num_base_bdevs_discovered": 1, 00:14:17.411 "num_base_bdevs_operational": 4, 00:14:17.411 "base_bdevs_list": [ 00:14:17.411 { 00:14:17.411 "name": "BaseBdev1", 00:14:17.411 "uuid": "2c947c5a-396d-479a-afe2-b112bb20c457", 00:14:17.411 "is_configured": true, 00:14:17.411 "data_offset": 0, 00:14:17.411 "data_size": 65536 00:14:17.411 }, 00:14:17.411 { 00:14:17.411 "name": "BaseBdev2", 00:14:17.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.411 "is_configured": false, 00:14:17.411 "data_offset": 0, 00:14:17.411 "data_size": 0 00:14:17.411 }, 00:14:17.411 { 00:14:17.411 "name": "BaseBdev3", 00:14:17.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.411 "is_configured": false, 00:14:17.411 "data_offset": 0, 00:14:17.411 "data_size": 0 00:14:17.411 }, 00:14:17.411 { 00:14:17.411 "name": "BaseBdev4", 00:14:17.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.411 "is_configured": false, 00:14:17.411 "data_offset": 0, 00:14:17.411 "data_size": 0 00:14:17.411 } 00:14:17.411 ] 00:14:17.411 }' 00:14:17.411 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.411 12:57:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.981 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:17.981 12:57:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.981 12:57:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.981 
[2024-11-26 12:57:35.432954] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:17.981 [2024-11-26 12:57:35.433054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:14:17.981 12:57:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.981 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:17.981 12:57:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.981 12:57:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.981 [2024-11-26 12:57:35.444979] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:17.981 [2024-11-26 12:57:35.446791] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:17.981 [2024-11-26 12:57:35.446829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:17.981 [2024-11-26 12:57:35.446837] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:17.981 [2024-11-26 12:57:35.446845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:17.981 [2024-11-26 12:57:35.446851] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:17.981 [2024-11-26 12:57:35.446859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:17.981 12:57:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.981 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:17.981 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:14:17.981 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:17.981 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.981 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.981 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:17.981 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.981 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:17.981 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.981 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.981 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.981 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.981 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.981 12:57:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.981 12:57:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.981 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.981 12:57:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.981 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.981 "name": "Existed_Raid", 00:14:17.981 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:17.981 "strip_size_kb": 64, 00:14:17.981 "state": "configuring", 00:14:17.981 "raid_level": "raid5f", 00:14:17.981 "superblock": false, 00:14:17.981 "num_base_bdevs": 4, 00:14:17.981 "num_base_bdevs_discovered": 1, 00:14:17.981 "num_base_bdevs_operational": 4, 00:14:17.981 "base_bdevs_list": [ 00:14:17.981 { 00:14:17.981 "name": "BaseBdev1", 00:14:17.981 "uuid": "2c947c5a-396d-479a-afe2-b112bb20c457", 00:14:17.981 "is_configured": true, 00:14:17.981 "data_offset": 0, 00:14:17.981 "data_size": 65536 00:14:17.981 }, 00:14:17.981 { 00:14:17.981 "name": "BaseBdev2", 00:14:17.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.981 "is_configured": false, 00:14:17.981 "data_offset": 0, 00:14:17.981 "data_size": 0 00:14:17.981 }, 00:14:17.981 { 00:14:17.981 "name": "BaseBdev3", 00:14:17.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.981 "is_configured": false, 00:14:17.981 "data_offset": 0, 00:14:17.981 "data_size": 0 00:14:17.981 }, 00:14:17.981 { 00:14:17.981 "name": "BaseBdev4", 00:14:17.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.981 "is_configured": false, 00:14:17.981 "data_offset": 0, 00:14:17.981 "data_size": 0 00:14:17.981 } 00:14:17.981 ] 00:14:17.981 }' 00:14:17.981 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.981 12:57:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.241 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:18.241 12:57:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.241 12:57:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.501 [2024-11-26 12:57:35.936852] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:18.501 BaseBdev2 00:14:18.501 12:57:35 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.501 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:18.501 12:57:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:18.501 12:57:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:18.501 12:57:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:18.501 12:57:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:18.501 12:57:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:18.501 12:57:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:18.501 12:57:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.501 12:57:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.501 12:57:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.501 12:57:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:18.501 12:57:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.501 12:57:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.501 [ 00:14:18.501 { 00:14:18.501 "name": "BaseBdev2", 00:14:18.501 "aliases": [ 00:14:18.501 "ba245a2f-8e37-433e-9826-affcf4abfeda" 00:14:18.501 ], 00:14:18.501 "product_name": "Malloc disk", 00:14:18.501 "block_size": 512, 00:14:18.501 "num_blocks": 65536, 00:14:18.501 "uuid": "ba245a2f-8e37-433e-9826-affcf4abfeda", 00:14:18.501 "assigned_rate_limits": { 00:14:18.501 "rw_ios_per_sec": 0, 00:14:18.501 "rw_mbytes_per_sec": 0, 00:14:18.501 
"r_mbytes_per_sec": 0, 00:14:18.501 "w_mbytes_per_sec": 0 00:14:18.501 }, 00:14:18.501 "claimed": true, 00:14:18.501 "claim_type": "exclusive_write", 00:14:18.501 "zoned": false, 00:14:18.501 "supported_io_types": { 00:14:18.501 "read": true, 00:14:18.501 "write": true, 00:14:18.501 "unmap": true, 00:14:18.501 "flush": true, 00:14:18.501 "reset": true, 00:14:18.501 "nvme_admin": false, 00:14:18.501 "nvme_io": false, 00:14:18.501 "nvme_io_md": false, 00:14:18.501 "write_zeroes": true, 00:14:18.501 "zcopy": true, 00:14:18.501 "get_zone_info": false, 00:14:18.501 "zone_management": false, 00:14:18.501 "zone_append": false, 00:14:18.501 "compare": false, 00:14:18.501 "compare_and_write": false, 00:14:18.501 "abort": true, 00:14:18.501 "seek_hole": false, 00:14:18.501 "seek_data": false, 00:14:18.501 "copy": true, 00:14:18.501 "nvme_iov_md": false 00:14:18.501 }, 00:14:18.501 "memory_domains": [ 00:14:18.501 { 00:14:18.501 "dma_device_id": "system", 00:14:18.501 "dma_device_type": 1 00:14:18.501 }, 00:14:18.501 { 00:14:18.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.501 "dma_device_type": 2 00:14:18.501 } 00:14:18.501 ], 00:14:18.501 "driver_specific": {} 00:14:18.501 } 00:14:18.501 ] 00:14:18.501 12:57:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.501 12:57:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:18.501 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:18.501 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:18.501 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:18.501 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.501 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:14:18.501 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:18.502 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.502 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:18.502 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.502 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.502 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.502 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.502 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.502 12:57:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.502 12:57:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.502 12:57:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.502 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.502 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.502 "name": "Existed_Raid", 00:14:18.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.502 "strip_size_kb": 64, 00:14:18.502 "state": "configuring", 00:14:18.502 "raid_level": "raid5f", 00:14:18.502 "superblock": false, 00:14:18.502 "num_base_bdevs": 4, 00:14:18.502 "num_base_bdevs_discovered": 2, 00:14:18.502 "num_base_bdevs_operational": 4, 00:14:18.502 "base_bdevs_list": [ 00:14:18.502 { 00:14:18.502 "name": "BaseBdev1", 00:14:18.502 "uuid": 
"2c947c5a-396d-479a-afe2-b112bb20c457", 00:14:18.502 "is_configured": true, 00:14:18.502 "data_offset": 0, 00:14:18.502 "data_size": 65536 00:14:18.502 }, 00:14:18.502 { 00:14:18.502 "name": "BaseBdev2", 00:14:18.502 "uuid": "ba245a2f-8e37-433e-9826-affcf4abfeda", 00:14:18.502 "is_configured": true, 00:14:18.502 "data_offset": 0, 00:14:18.502 "data_size": 65536 00:14:18.502 }, 00:14:18.502 { 00:14:18.502 "name": "BaseBdev3", 00:14:18.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.502 "is_configured": false, 00:14:18.502 "data_offset": 0, 00:14:18.502 "data_size": 0 00:14:18.502 }, 00:14:18.502 { 00:14:18.502 "name": "BaseBdev4", 00:14:18.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.502 "is_configured": false, 00:14:18.502 "data_offset": 0, 00:14:18.502 "data_size": 0 00:14:18.502 } 00:14:18.502 ] 00:14:18.502 }' 00:14:18.502 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.502 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.762 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:18.762 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.762 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.762 [2024-11-26 12:57:36.427070] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:18.762 BaseBdev3 00:14:18.762 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.762 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:18.762 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:18.762 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- 
# local bdev_timeout= 00:14:18.762 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:18.762 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:18.762 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:18.762 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:18.762 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.762 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.022 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.022 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:19.022 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.022 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.022 [ 00:14:19.022 { 00:14:19.022 "name": "BaseBdev3", 00:14:19.022 "aliases": [ 00:14:19.022 "ba8754cc-df39-426d-bfd5-b9ca0d94a947" 00:14:19.022 ], 00:14:19.022 "product_name": "Malloc disk", 00:14:19.022 "block_size": 512, 00:14:19.022 "num_blocks": 65536, 00:14:19.022 "uuid": "ba8754cc-df39-426d-bfd5-b9ca0d94a947", 00:14:19.022 "assigned_rate_limits": { 00:14:19.022 "rw_ios_per_sec": 0, 00:14:19.022 "rw_mbytes_per_sec": 0, 00:14:19.022 "r_mbytes_per_sec": 0, 00:14:19.022 "w_mbytes_per_sec": 0 00:14:19.022 }, 00:14:19.022 "claimed": true, 00:14:19.022 "claim_type": "exclusive_write", 00:14:19.022 "zoned": false, 00:14:19.022 "supported_io_types": { 00:14:19.022 "read": true, 00:14:19.022 "write": true, 00:14:19.022 "unmap": true, 00:14:19.022 "flush": true, 00:14:19.022 "reset": true, 00:14:19.022 "nvme_admin": false, 
00:14:19.022 "nvme_io": false, 00:14:19.022 "nvme_io_md": false, 00:14:19.022 "write_zeroes": true, 00:14:19.023 "zcopy": true, 00:14:19.023 "get_zone_info": false, 00:14:19.023 "zone_management": false, 00:14:19.023 "zone_append": false, 00:14:19.023 "compare": false, 00:14:19.023 "compare_and_write": false, 00:14:19.023 "abort": true, 00:14:19.023 "seek_hole": false, 00:14:19.023 "seek_data": false, 00:14:19.023 "copy": true, 00:14:19.023 "nvme_iov_md": false 00:14:19.023 }, 00:14:19.023 "memory_domains": [ 00:14:19.023 { 00:14:19.023 "dma_device_id": "system", 00:14:19.023 "dma_device_type": 1 00:14:19.023 }, 00:14:19.023 { 00:14:19.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.023 "dma_device_type": 2 00:14:19.023 } 00:14:19.023 ], 00:14:19.023 "driver_specific": {} 00:14:19.023 } 00:14:19.023 ] 00:14:19.023 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.023 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:19.023 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:19.023 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:19.023 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:19.023 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.023 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.023 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:19.023 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.023 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:14:19.023 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.023 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.023 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.023 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.023 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.023 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.023 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.023 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.023 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.023 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.023 "name": "Existed_Raid", 00:14:19.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.023 "strip_size_kb": 64, 00:14:19.023 "state": "configuring", 00:14:19.023 "raid_level": "raid5f", 00:14:19.023 "superblock": false, 00:14:19.023 "num_base_bdevs": 4, 00:14:19.023 "num_base_bdevs_discovered": 3, 00:14:19.023 "num_base_bdevs_operational": 4, 00:14:19.023 "base_bdevs_list": [ 00:14:19.023 { 00:14:19.023 "name": "BaseBdev1", 00:14:19.023 "uuid": "2c947c5a-396d-479a-afe2-b112bb20c457", 00:14:19.023 "is_configured": true, 00:14:19.023 "data_offset": 0, 00:14:19.023 "data_size": 65536 00:14:19.023 }, 00:14:19.023 { 00:14:19.023 "name": "BaseBdev2", 00:14:19.023 "uuid": "ba245a2f-8e37-433e-9826-affcf4abfeda", 00:14:19.023 "is_configured": true, 00:14:19.023 "data_offset": 0, 00:14:19.023 "data_size": 65536 00:14:19.023 }, 00:14:19.023 { 
00:14:19.023 "name": "BaseBdev3", 00:14:19.023 "uuid": "ba8754cc-df39-426d-bfd5-b9ca0d94a947", 00:14:19.023 "is_configured": true, 00:14:19.023 "data_offset": 0, 00:14:19.023 "data_size": 65536 00:14:19.023 }, 00:14:19.023 { 00:14:19.023 "name": "BaseBdev4", 00:14:19.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.023 "is_configured": false, 00:14:19.023 "data_offset": 0, 00:14:19.023 "data_size": 0 00:14:19.023 } 00:14:19.023 ] 00:14:19.023 }' 00:14:19.023 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.023 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.283 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:19.283 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.283 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.283 [2024-11-26 12:57:36.953220] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:19.283 [2024-11-26 12:57:36.953363] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:19.283 [2024-11-26 12:57:36.953388] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:19.283 [2024-11-26 12:57:36.953711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:19.283 [2024-11-26 12:57:36.954207] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:19.283 [2024-11-26 12:57:36.954261] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:14:19.283 [2024-11-26 12:57:36.954488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.283 BaseBdev4 00:14:19.283 12:57:36 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.283 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:19.283 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:19.283 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:19.283 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:19.283 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:19.283 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:19.283 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:19.283 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.283 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.543 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.543 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:19.543 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.543 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.543 [ 00:14:19.543 { 00:14:19.543 "name": "BaseBdev4", 00:14:19.543 "aliases": [ 00:14:19.543 "df1778e8-9de8-4aee-b267-228c0ef89d57" 00:14:19.543 ], 00:14:19.543 "product_name": "Malloc disk", 00:14:19.543 "block_size": 512, 00:14:19.543 "num_blocks": 65536, 00:14:19.543 "uuid": "df1778e8-9de8-4aee-b267-228c0ef89d57", 00:14:19.543 "assigned_rate_limits": { 00:14:19.543 "rw_ios_per_sec": 0, 00:14:19.543 
"rw_mbytes_per_sec": 0, 00:14:19.543 "r_mbytes_per_sec": 0, 00:14:19.543 "w_mbytes_per_sec": 0 00:14:19.543 }, 00:14:19.543 "claimed": true, 00:14:19.543 "claim_type": "exclusive_write", 00:14:19.543 "zoned": false, 00:14:19.543 "supported_io_types": { 00:14:19.543 "read": true, 00:14:19.543 "write": true, 00:14:19.543 "unmap": true, 00:14:19.543 "flush": true, 00:14:19.543 "reset": true, 00:14:19.543 "nvme_admin": false, 00:14:19.543 "nvme_io": false, 00:14:19.543 "nvme_io_md": false, 00:14:19.543 "write_zeroes": true, 00:14:19.543 "zcopy": true, 00:14:19.543 "get_zone_info": false, 00:14:19.543 "zone_management": false, 00:14:19.543 "zone_append": false, 00:14:19.543 "compare": false, 00:14:19.543 "compare_and_write": false, 00:14:19.543 "abort": true, 00:14:19.543 "seek_hole": false, 00:14:19.543 "seek_data": false, 00:14:19.543 "copy": true, 00:14:19.543 "nvme_iov_md": false 00:14:19.543 }, 00:14:19.543 "memory_domains": [ 00:14:19.543 { 00:14:19.543 "dma_device_id": "system", 00:14:19.543 "dma_device_type": 1 00:14:19.543 }, 00:14:19.543 { 00:14:19.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.543 "dma_device_type": 2 00:14:19.543 } 00:14:19.543 ], 00:14:19.543 "driver_specific": {} 00:14:19.543 } 00:14:19.543 ] 00:14:19.543 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.543 12:57:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:19.543 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:19.543 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:19.543 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:19.543 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.543 12:57:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.543 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:19.543 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.543 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:19.543 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.543 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.543 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.543 12:57:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.543 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.543 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.543 12:57:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.543 12:57:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.544 12:57:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.544 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.544 "name": "Existed_Raid", 00:14:19.544 "uuid": "d2c1fd33-de45-453d-948c-c63acf109eba", 00:14:19.544 "strip_size_kb": 64, 00:14:19.544 "state": "online", 00:14:19.544 "raid_level": "raid5f", 00:14:19.544 "superblock": false, 00:14:19.544 "num_base_bdevs": 4, 00:14:19.544 "num_base_bdevs_discovered": 4, 00:14:19.544 "num_base_bdevs_operational": 4, 00:14:19.544 "base_bdevs_list": [ 00:14:19.544 { 00:14:19.544 "name": 
"BaseBdev1", 00:14:19.544 "uuid": "2c947c5a-396d-479a-afe2-b112bb20c457", 00:14:19.544 "is_configured": true, 00:14:19.544 "data_offset": 0, 00:14:19.544 "data_size": 65536 00:14:19.544 }, 00:14:19.544 { 00:14:19.544 "name": "BaseBdev2", 00:14:19.544 "uuid": "ba245a2f-8e37-433e-9826-affcf4abfeda", 00:14:19.544 "is_configured": true, 00:14:19.544 "data_offset": 0, 00:14:19.544 "data_size": 65536 00:14:19.544 }, 00:14:19.544 { 00:14:19.544 "name": "BaseBdev3", 00:14:19.544 "uuid": "ba8754cc-df39-426d-bfd5-b9ca0d94a947", 00:14:19.544 "is_configured": true, 00:14:19.544 "data_offset": 0, 00:14:19.544 "data_size": 65536 00:14:19.544 }, 00:14:19.544 { 00:14:19.544 "name": "BaseBdev4", 00:14:19.544 "uuid": "df1778e8-9de8-4aee-b267-228c0ef89d57", 00:14:19.544 "is_configured": true, 00:14:19.544 "data_offset": 0, 00:14:19.544 "data_size": 65536 00:14:19.544 } 00:14:19.544 ] 00:14:19.544 }' 00:14:19.544 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.544 12:57:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.803 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:19.803 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:19.803 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:19.803 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:19.804 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:19.804 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:19.804 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:19.804 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:14:19.804 12:57:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.804 12:57:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.804 [2024-11-26 12:57:37.440666] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:19.804 12:57:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.804 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:19.804 "name": "Existed_Raid", 00:14:19.804 "aliases": [ 00:14:19.804 "d2c1fd33-de45-453d-948c-c63acf109eba" 00:14:19.804 ], 00:14:19.804 "product_name": "Raid Volume", 00:14:19.804 "block_size": 512, 00:14:19.804 "num_blocks": 196608, 00:14:19.804 "uuid": "d2c1fd33-de45-453d-948c-c63acf109eba", 00:14:19.804 "assigned_rate_limits": { 00:14:19.804 "rw_ios_per_sec": 0, 00:14:19.804 "rw_mbytes_per_sec": 0, 00:14:19.804 "r_mbytes_per_sec": 0, 00:14:19.804 "w_mbytes_per_sec": 0 00:14:19.804 }, 00:14:19.804 "claimed": false, 00:14:19.804 "zoned": false, 00:14:19.804 "supported_io_types": { 00:14:19.804 "read": true, 00:14:19.804 "write": true, 00:14:19.804 "unmap": false, 00:14:19.804 "flush": false, 00:14:19.804 "reset": true, 00:14:19.804 "nvme_admin": false, 00:14:19.804 "nvme_io": false, 00:14:19.804 "nvme_io_md": false, 00:14:19.804 "write_zeroes": true, 00:14:19.804 "zcopy": false, 00:14:19.804 "get_zone_info": false, 00:14:19.804 "zone_management": false, 00:14:19.804 "zone_append": false, 00:14:19.804 "compare": false, 00:14:19.804 "compare_and_write": false, 00:14:19.804 "abort": false, 00:14:19.804 "seek_hole": false, 00:14:19.804 "seek_data": false, 00:14:19.804 "copy": false, 00:14:19.804 "nvme_iov_md": false 00:14:19.804 }, 00:14:19.804 "driver_specific": { 00:14:19.804 "raid": { 00:14:19.804 "uuid": "d2c1fd33-de45-453d-948c-c63acf109eba", 00:14:19.804 "strip_size_kb": 64, 
00:14:19.804 "state": "online", 00:14:19.804 "raid_level": "raid5f", 00:14:19.804 "superblock": false, 00:14:19.804 "num_base_bdevs": 4, 00:14:19.804 "num_base_bdevs_discovered": 4, 00:14:19.804 "num_base_bdevs_operational": 4, 00:14:19.804 "base_bdevs_list": [ 00:14:19.804 { 00:14:19.804 "name": "BaseBdev1", 00:14:19.804 "uuid": "2c947c5a-396d-479a-afe2-b112bb20c457", 00:14:19.804 "is_configured": true, 00:14:19.804 "data_offset": 0, 00:14:19.804 "data_size": 65536 00:14:19.804 }, 00:14:19.804 { 00:14:19.804 "name": "BaseBdev2", 00:14:19.804 "uuid": "ba245a2f-8e37-433e-9826-affcf4abfeda", 00:14:19.804 "is_configured": true, 00:14:19.804 "data_offset": 0, 00:14:19.804 "data_size": 65536 00:14:19.804 }, 00:14:19.804 { 00:14:19.804 "name": "BaseBdev3", 00:14:19.804 "uuid": "ba8754cc-df39-426d-bfd5-b9ca0d94a947", 00:14:19.804 "is_configured": true, 00:14:19.804 "data_offset": 0, 00:14:19.804 "data_size": 65536 00:14:19.804 }, 00:14:19.804 { 00:14:19.804 "name": "BaseBdev4", 00:14:19.804 "uuid": "df1778e8-9de8-4aee-b267-228c0ef89d57", 00:14:19.804 "is_configured": true, 00:14:19.804 "data_offset": 0, 00:14:19.804 "data_size": 65536 00:14:19.804 } 00:14:19.804 ] 00:14:19.804 } 00:14:19.804 } 00:14:19.804 }' 00:14:19.804 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:20.064 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:20.064 BaseBdev2 00:14:20.064 BaseBdev3 00:14:20.064 BaseBdev4' 00:14:20.064 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:20.064 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:20.064 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:20.064 12:57:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:20.064 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.065 12:57:37 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:14:20.065 [2024-11-26 12:57:37.735987] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:20.325 12:57:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.325 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:20.325 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:20.325 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:20.325 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:20.325 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:20.325 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:20.325 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.325 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.325 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:20.325 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.325 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.325 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.325 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.325 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.325 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.325 12:57:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.325 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.325 12:57:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.325 12:57:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.325 12:57:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.325 12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.325 "name": "Existed_Raid", 00:14:20.325 "uuid": "d2c1fd33-de45-453d-948c-c63acf109eba", 00:14:20.325 "strip_size_kb": 64, 00:14:20.325 "state": "online", 00:14:20.325 "raid_level": "raid5f", 00:14:20.325 "superblock": false, 00:14:20.325 "num_base_bdevs": 4, 00:14:20.325 "num_base_bdevs_discovered": 3, 00:14:20.325 "num_base_bdevs_operational": 3, 00:14:20.325 "base_bdevs_list": [ 00:14:20.325 { 00:14:20.325 "name": null, 00:14:20.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.325 "is_configured": false, 00:14:20.325 "data_offset": 0, 00:14:20.325 "data_size": 65536 00:14:20.325 }, 00:14:20.325 { 00:14:20.325 "name": "BaseBdev2", 00:14:20.325 "uuid": "ba245a2f-8e37-433e-9826-affcf4abfeda", 00:14:20.325 "is_configured": true, 00:14:20.325 "data_offset": 0, 00:14:20.325 "data_size": 65536 00:14:20.325 }, 00:14:20.325 { 00:14:20.325 "name": "BaseBdev3", 00:14:20.325 "uuid": "ba8754cc-df39-426d-bfd5-b9ca0d94a947", 00:14:20.325 "is_configured": true, 00:14:20.325 "data_offset": 0, 00:14:20.325 "data_size": 65536 00:14:20.325 }, 00:14:20.325 { 00:14:20.325 "name": "BaseBdev4", 00:14:20.325 "uuid": "df1778e8-9de8-4aee-b267-228c0ef89d57", 00:14:20.325 "is_configured": true, 00:14:20.325 "data_offset": 0, 00:14:20.325 "data_size": 65536 00:14:20.325 } 00:14:20.325 ] 00:14:20.325 }' 00:14:20.325 
12:57:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.325 12:57:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.585 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:20.585 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:20.585 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.585 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:20.585 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.585 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.846 [2024-11-26 12:57:38.298701] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:20.846 [2024-11-26 12:57:38.298854] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:20.846 [2024-11-26 12:57:38.309806] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.846 [2024-11-26 12:57:38.365729] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.846 [2024-11-26 12:57:38.436310] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:20.846 [2024-11-26 12:57:38.436433] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.846 12:57:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.846 BaseBdev2 00:14:20.846 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.847 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:20.847 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:20.847 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:20.847 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:20.847 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:20.847 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:20.847 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:20.847 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:20.847 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.107 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.107 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:21.107 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.107 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.107 [ 00:14:21.107 { 00:14:21.107 "name": "BaseBdev2", 00:14:21.107 "aliases": [ 00:14:21.107 "9e46324e-e0b2-4adc-995e-8fd6a8f897e3" 00:14:21.107 ], 00:14:21.107 "product_name": "Malloc disk", 00:14:21.107 "block_size": 512, 00:14:21.107 "num_blocks": 65536, 00:14:21.107 "uuid": "9e46324e-e0b2-4adc-995e-8fd6a8f897e3", 00:14:21.107 "assigned_rate_limits": { 00:14:21.107 "rw_ios_per_sec": 0, 00:14:21.107 "rw_mbytes_per_sec": 0, 00:14:21.107 "r_mbytes_per_sec": 0, 00:14:21.107 "w_mbytes_per_sec": 0 00:14:21.107 }, 00:14:21.107 "claimed": false, 00:14:21.107 "zoned": false, 00:14:21.107 "supported_io_types": { 00:14:21.107 "read": true, 00:14:21.107 "write": true, 00:14:21.107 "unmap": true, 00:14:21.107 "flush": true, 00:14:21.107 "reset": true, 00:14:21.107 "nvme_admin": false, 00:14:21.107 "nvme_io": false, 00:14:21.107 "nvme_io_md": false, 00:14:21.107 "write_zeroes": true, 00:14:21.107 "zcopy": true, 00:14:21.107 "get_zone_info": false, 00:14:21.108 "zone_management": false, 00:14:21.108 "zone_append": false, 00:14:21.108 "compare": false, 00:14:21.108 "compare_and_write": false, 00:14:21.108 "abort": true, 00:14:21.108 "seek_hole": false, 00:14:21.108 "seek_data": false, 00:14:21.108 "copy": true, 00:14:21.108 "nvme_iov_md": false 00:14:21.108 }, 00:14:21.108 "memory_domains": [ 00:14:21.108 { 00:14:21.108 "dma_device_id": "system", 00:14:21.108 "dma_device_type": 1 00:14:21.108 }, 
00:14:21.108 { 00:14:21.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.108 "dma_device_type": 2 00:14:21.108 } 00:14:21.108 ], 00:14:21.108 "driver_specific": {} 00:14:21.108 } 00:14:21.108 ] 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.108 BaseBdev3 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.108 [ 00:14:21.108 { 00:14:21.108 "name": "BaseBdev3", 00:14:21.108 "aliases": [ 00:14:21.108 "0c54b59b-ce68-445a-9172-09d7f9975601" 00:14:21.108 ], 00:14:21.108 "product_name": "Malloc disk", 00:14:21.108 "block_size": 512, 00:14:21.108 "num_blocks": 65536, 00:14:21.108 "uuid": "0c54b59b-ce68-445a-9172-09d7f9975601", 00:14:21.108 "assigned_rate_limits": { 00:14:21.108 "rw_ios_per_sec": 0, 00:14:21.108 "rw_mbytes_per_sec": 0, 00:14:21.108 "r_mbytes_per_sec": 0, 00:14:21.108 "w_mbytes_per_sec": 0 00:14:21.108 }, 00:14:21.108 "claimed": false, 00:14:21.108 "zoned": false, 00:14:21.108 "supported_io_types": { 00:14:21.108 "read": true, 00:14:21.108 "write": true, 00:14:21.108 "unmap": true, 00:14:21.108 "flush": true, 00:14:21.108 "reset": true, 00:14:21.108 "nvme_admin": false, 00:14:21.108 "nvme_io": false, 00:14:21.108 "nvme_io_md": false, 00:14:21.108 "write_zeroes": true, 00:14:21.108 "zcopy": true, 00:14:21.108 "get_zone_info": false, 00:14:21.108 "zone_management": false, 00:14:21.108 "zone_append": false, 00:14:21.108 "compare": false, 00:14:21.108 "compare_and_write": false, 00:14:21.108 "abort": true, 00:14:21.108 "seek_hole": false, 00:14:21.108 "seek_data": false, 00:14:21.108 "copy": true, 00:14:21.108 "nvme_iov_md": false 00:14:21.108 }, 00:14:21.108 "memory_domains": [ 00:14:21.108 { 00:14:21.108 "dma_device_id": "system", 00:14:21.108 
"dma_device_type": 1 00:14:21.108 }, 00:14:21.108 { 00:14:21.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.108 "dma_device_type": 2 00:14:21.108 } 00:14:21.108 ], 00:14:21.108 "driver_specific": {} 00:14:21.108 } 00:14:21.108 ] 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.108 BaseBdev4 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:21.108 12:57:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.108 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.108 [ 00:14:21.108 { 00:14:21.108 "name": "BaseBdev4", 00:14:21.108 "aliases": [ 00:14:21.108 "946ed0ec-47a1-44eb-a721-94ea913e47ab" 00:14:21.108 ], 00:14:21.108 "product_name": "Malloc disk", 00:14:21.108 "block_size": 512, 00:14:21.108 "num_blocks": 65536, 00:14:21.108 "uuid": "946ed0ec-47a1-44eb-a721-94ea913e47ab", 00:14:21.108 "assigned_rate_limits": { 00:14:21.108 "rw_ios_per_sec": 0, 00:14:21.108 "rw_mbytes_per_sec": 0, 00:14:21.108 "r_mbytes_per_sec": 0, 00:14:21.108 "w_mbytes_per_sec": 0 00:14:21.108 }, 00:14:21.108 "claimed": false, 00:14:21.108 "zoned": false, 00:14:21.108 "supported_io_types": { 00:14:21.108 "read": true, 00:14:21.108 "write": true, 00:14:21.108 "unmap": true, 00:14:21.108 "flush": true, 00:14:21.108 "reset": true, 00:14:21.109 "nvme_admin": false, 00:14:21.109 "nvme_io": false, 00:14:21.109 "nvme_io_md": false, 00:14:21.109 "write_zeroes": true, 00:14:21.109 "zcopy": true, 00:14:21.109 "get_zone_info": false, 00:14:21.109 "zone_management": false, 00:14:21.109 "zone_append": false, 00:14:21.109 "compare": false, 00:14:21.109 "compare_and_write": false, 00:14:21.109 "abort": true, 00:14:21.109 "seek_hole": false, 00:14:21.109 "seek_data": false, 00:14:21.109 "copy": true, 00:14:21.109 "nvme_iov_md": false 00:14:21.109 }, 00:14:21.109 "memory_domains": [ 00:14:21.109 { 00:14:21.109 
"dma_device_id": "system", 00:14:21.109 "dma_device_type": 1 00:14:21.109 }, 00:14:21.109 { 00:14:21.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.109 "dma_device_type": 2 00:14:21.109 } 00:14:21.109 ], 00:14:21.109 "driver_specific": {} 00:14:21.109 } 00:14:21.109 ] 00:14:21.109 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.109 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:21.109 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:21.109 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:21.109 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:21.109 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.109 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.109 [2024-11-26 12:57:38.662719] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:21.109 [2024-11-26 12:57:38.662843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:21.109 [2024-11-26 12:57:38.662882] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:21.109 [2024-11-26 12:57:38.664688] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:21.109 [2024-11-26 12:57:38.664782] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:21.109 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.109 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:14:21.109 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.109 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.109 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:21.109 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.109 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:21.109 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.109 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.109 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.109 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.109 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.109 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.109 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.109 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.109 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.109 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.109 "name": "Existed_Raid", 00:14:21.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.109 "strip_size_kb": 64, 00:14:21.109 "state": "configuring", 00:14:21.109 "raid_level": "raid5f", 00:14:21.109 "superblock": false, 00:14:21.109 
"num_base_bdevs": 4, 00:14:21.109 "num_base_bdevs_discovered": 3, 00:14:21.109 "num_base_bdevs_operational": 4, 00:14:21.109 "base_bdevs_list": [ 00:14:21.109 { 00:14:21.109 "name": "BaseBdev1", 00:14:21.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.109 "is_configured": false, 00:14:21.109 "data_offset": 0, 00:14:21.109 "data_size": 0 00:14:21.109 }, 00:14:21.109 { 00:14:21.109 "name": "BaseBdev2", 00:14:21.109 "uuid": "9e46324e-e0b2-4adc-995e-8fd6a8f897e3", 00:14:21.109 "is_configured": true, 00:14:21.109 "data_offset": 0, 00:14:21.109 "data_size": 65536 00:14:21.109 }, 00:14:21.109 { 00:14:21.109 "name": "BaseBdev3", 00:14:21.109 "uuid": "0c54b59b-ce68-445a-9172-09d7f9975601", 00:14:21.109 "is_configured": true, 00:14:21.109 "data_offset": 0, 00:14:21.109 "data_size": 65536 00:14:21.109 }, 00:14:21.109 { 00:14:21.109 "name": "BaseBdev4", 00:14:21.109 "uuid": "946ed0ec-47a1-44eb-a721-94ea913e47ab", 00:14:21.109 "is_configured": true, 00:14:21.109 "data_offset": 0, 00:14:21.109 "data_size": 65536 00:14:21.109 } 00:14:21.109 ] 00:14:21.109 }' 00:14:21.109 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.109 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.682 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:21.682 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.682 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.682 [2024-11-26 12:57:39.145891] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:21.682 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.682 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:14:21.682 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.682 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.682 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:21.682 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.682 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:21.682 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.682 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.682 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.682 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.682 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.682 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.682 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.682 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.682 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.682 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.682 "name": "Existed_Raid", 00:14:21.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.682 "strip_size_kb": 64, 00:14:21.682 "state": "configuring", 00:14:21.682 "raid_level": "raid5f", 00:14:21.682 "superblock": false, 00:14:21.682 "num_base_bdevs": 4, 
00:14:21.682 "num_base_bdevs_discovered": 2, 00:14:21.682 "num_base_bdevs_operational": 4, 00:14:21.682 "base_bdevs_list": [ 00:14:21.682 { 00:14:21.682 "name": "BaseBdev1", 00:14:21.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.682 "is_configured": false, 00:14:21.682 "data_offset": 0, 00:14:21.682 "data_size": 0 00:14:21.682 }, 00:14:21.682 { 00:14:21.682 "name": null, 00:14:21.682 "uuid": "9e46324e-e0b2-4adc-995e-8fd6a8f897e3", 00:14:21.682 "is_configured": false, 00:14:21.682 "data_offset": 0, 00:14:21.682 "data_size": 65536 00:14:21.682 }, 00:14:21.682 { 00:14:21.682 "name": "BaseBdev3", 00:14:21.682 "uuid": "0c54b59b-ce68-445a-9172-09d7f9975601", 00:14:21.682 "is_configured": true, 00:14:21.682 "data_offset": 0, 00:14:21.682 "data_size": 65536 00:14:21.682 }, 00:14:21.682 { 00:14:21.682 "name": "BaseBdev4", 00:14:21.682 "uuid": "946ed0ec-47a1-44eb-a721-94ea913e47ab", 00:14:21.682 "is_configured": true, 00:14:21.682 "data_offset": 0, 00:14:21.682 "data_size": 65536 00:14:21.682 } 00:14:21.682 ] 00:14:21.682 }' 00:14:21.682 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.682 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.252 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.252 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.252 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.252 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:22.252 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.252 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:22.252 12:57:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:22.252 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.252 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.252 [2024-11-26 12:57:39.707872] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:22.252 BaseBdev1 00:14:22.252 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.252 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:22.252 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:22.252 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:22.252 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:22.252 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:22.252 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:22.252 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:22.252 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.252 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.252 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.252 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:22.252 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.252 12:57:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.252 [ 00:14:22.252 { 00:14:22.252 "name": "BaseBdev1", 00:14:22.252 "aliases": [ 00:14:22.252 "bf0a6192-f9d0-46f0-8b3a-487305d6801a" 00:14:22.252 ], 00:14:22.252 "product_name": "Malloc disk", 00:14:22.252 "block_size": 512, 00:14:22.252 "num_blocks": 65536, 00:14:22.252 "uuid": "bf0a6192-f9d0-46f0-8b3a-487305d6801a", 00:14:22.252 "assigned_rate_limits": { 00:14:22.252 "rw_ios_per_sec": 0, 00:14:22.252 "rw_mbytes_per_sec": 0, 00:14:22.252 "r_mbytes_per_sec": 0, 00:14:22.252 "w_mbytes_per_sec": 0 00:14:22.252 }, 00:14:22.252 "claimed": true, 00:14:22.252 "claim_type": "exclusive_write", 00:14:22.252 "zoned": false, 00:14:22.252 "supported_io_types": { 00:14:22.252 "read": true, 00:14:22.252 "write": true, 00:14:22.252 "unmap": true, 00:14:22.252 "flush": true, 00:14:22.252 "reset": true, 00:14:22.252 "nvme_admin": false, 00:14:22.252 "nvme_io": false, 00:14:22.252 "nvme_io_md": false, 00:14:22.252 "write_zeroes": true, 00:14:22.252 "zcopy": true, 00:14:22.252 "get_zone_info": false, 00:14:22.252 "zone_management": false, 00:14:22.252 "zone_append": false, 00:14:22.252 "compare": false, 00:14:22.252 "compare_and_write": false, 00:14:22.252 "abort": true, 00:14:22.252 "seek_hole": false, 00:14:22.252 "seek_data": false, 00:14:22.252 "copy": true, 00:14:22.252 "nvme_iov_md": false 00:14:22.252 }, 00:14:22.252 "memory_domains": [ 00:14:22.252 { 00:14:22.252 "dma_device_id": "system", 00:14:22.252 "dma_device_type": 1 00:14:22.252 }, 00:14:22.252 { 00:14:22.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.252 "dma_device_type": 2 00:14:22.252 } 00:14:22.252 ], 00:14:22.252 "driver_specific": {} 00:14:22.252 } 00:14:22.252 ] 00:14:22.252 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.252 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:22.252 12:57:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:22.252 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.252 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.253 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:22.253 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.253 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:22.253 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.253 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.253 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.253 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.253 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.253 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.253 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.253 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.253 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.253 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.253 "name": "Existed_Raid", 00:14:22.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.253 "strip_size_kb": 64, 00:14:22.253 "state": 
"configuring", 00:14:22.253 "raid_level": "raid5f", 00:14:22.253 "superblock": false, 00:14:22.253 "num_base_bdevs": 4, 00:14:22.253 "num_base_bdevs_discovered": 3, 00:14:22.253 "num_base_bdevs_operational": 4, 00:14:22.253 "base_bdevs_list": [ 00:14:22.253 { 00:14:22.253 "name": "BaseBdev1", 00:14:22.253 "uuid": "bf0a6192-f9d0-46f0-8b3a-487305d6801a", 00:14:22.253 "is_configured": true, 00:14:22.253 "data_offset": 0, 00:14:22.253 "data_size": 65536 00:14:22.253 }, 00:14:22.253 { 00:14:22.253 "name": null, 00:14:22.253 "uuid": "9e46324e-e0b2-4adc-995e-8fd6a8f897e3", 00:14:22.253 "is_configured": false, 00:14:22.253 "data_offset": 0, 00:14:22.253 "data_size": 65536 00:14:22.253 }, 00:14:22.253 { 00:14:22.253 "name": "BaseBdev3", 00:14:22.253 "uuid": "0c54b59b-ce68-445a-9172-09d7f9975601", 00:14:22.253 "is_configured": true, 00:14:22.253 "data_offset": 0, 00:14:22.253 "data_size": 65536 00:14:22.253 }, 00:14:22.253 { 00:14:22.253 "name": "BaseBdev4", 00:14:22.253 "uuid": "946ed0ec-47a1-44eb-a721-94ea913e47ab", 00:14:22.253 "is_configured": true, 00:14:22.253 "data_offset": 0, 00:14:22.253 "data_size": 65536 00:14:22.253 } 00:14:22.253 ] 00:14:22.253 }' 00:14:22.253 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.253 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.512 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.512 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:22.512 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.512 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.772 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.772 12:57:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:22.772 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:22.772 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.772 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.772 [2024-11-26 12:57:40.223069] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:22.772 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.772 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:22.772 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.772 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.772 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:22.772 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.772 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:22.772 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.772 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.772 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.772 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.772 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.772 12:57:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.772 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.772 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.772 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.772 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.772 "name": "Existed_Raid", 00:14:22.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.772 "strip_size_kb": 64, 00:14:22.772 "state": "configuring", 00:14:22.772 "raid_level": "raid5f", 00:14:22.772 "superblock": false, 00:14:22.772 "num_base_bdevs": 4, 00:14:22.772 "num_base_bdevs_discovered": 2, 00:14:22.772 "num_base_bdevs_operational": 4, 00:14:22.772 "base_bdevs_list": [ 00:14:22.772 { 00:14:22.772 "name": "BaseBdev1", 00:14:22.772 "uuid": "bf0a6192-f9d0-46f0-8b3a-487305d6801a", 00:14:22.772 "is_configured": true, 00:14:22.772 "data_offset": 0, 00:14:22.772 "data_size": 65536 00:14:22.772 }, 00:14:22.772 { 00:14:22.772 "name": null, 00:14:22.772 "uuid": "9e46324e-e0b2-4adc-995e-8fd6a8f897e3", 00:14:22.772 "is_configured": false, 00:14:22.772 "data_offset": 0, 00:14:22.772 "data_size": 65536 00:14:22.772 }, 00:14:22.772 { 00:14:22.772 "name": null, 00:14:22.772 "uuid": "0c54b59b-ce68-445a-9172-09d7f9975601", 00:14:22.772 "is_configured": false, 00:14:22.772 "data_offset": 0, 00:14:22.772 "data_size": 65536 00:14:22.772 }, 00:14:22.772 { 00:14:22.772 "name": "BaseBdev4", 00:14:22.772 "uuid": "946ed0ec-47a1-44eb-a721-94ea913e47ab", 00:14:22.772 "is_configured": true, 00:14:22.772 "data_offset": 0, 00:14:22.772 "data_size": 65536 00:14:22.772 } 00:14:22.772 ] 00:14:22.772 }' 00:14:22.772 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.772 12:57:40 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.032 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.032 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.032 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:23.032 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.032 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.291 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:23.291 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:23.291 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.291 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.291 [2024-11-26 12:57:40.734257] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:23.291 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.291 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:23.291 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.291 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.291 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.292 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.292 
12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:23.292 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.292 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.292 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.292 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.292 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.292 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.292 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.292 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.292 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.292 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.292 "name": "Existed_Raid", 00:14:23.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.292 "strip_size_kb": 64, 00:14:23.292 "state": "configuring", 00:14:23.292 "raid_level": "raid5f", 00:14:23.292 "superblock": false, 00:14:23.292 "num_base_bdevs": 4, 00:14:23.292 "num_base_bdevs_discovered": 3, 00:14:23.292 "num_base_bdevs_operational": 4, 00:14:23.292 "base_bdevs_list": [ 00:14:23.292 { 00:14:23.292 "name": "BaseBdev1", 00:14:23.292 "uuid": "bf0a6192-f9d0-46f0-8b3a-487305d6801a", 00:14:23.292 "is_configured": true, 00:14:23.292 "data_offset": 0, 00:14:23.292 "data_size": 65536 00:14:23.292 }, 00:14:23.292 { 00:14:23.292 "name": null, 00:14:23.292 "uuid": "9e46324e-e0b2-4adc-995e-8fd6a8f897e3", 00:14:23.292 "is_configured": 
false, 00:14:23.292 "data_offset": 0, 00:14:23.292 "data_size": 65536 00:14:23.292 }, 00:14:23.292 { 00:14:23.292 "name": "BaseBdev3", 00:14:23.292 "uuid": "0c54b59b-ce68-445a-9172-09d7f9975601", 00:14:23.292 "is_configured": true, 00:14:23.292 "data_offset": 0, 00:14:23.292 "data_size": 65536 00:14:23.292 }, 00:14:23.292 { 00:14:23.292 "name": "BaseBdev4", 00:14:23.292 "uuid": "946ed0ec-47a1-44eb-a721-94ea913e47ab", 00:14:23.292 "is_configured": true, 00:14:23.292 "data_offset": 0, 00:14:23.292 "data_size": 65536 00:14:23.292 } 00:14:23.292 ] 00:14:23.292 }' 00:14:23.292 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.292 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.553 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:23.553 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.553 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.553 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.553 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.813 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:23.813 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:23.813 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.813 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.813 [2024-11-26 12:57:41.237380] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:23.813 12:57:41 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.813 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:23.813 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.813 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.813 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.813 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.813 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:23.813 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.813 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.813 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.813 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.813 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.813 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.813 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.813 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.813 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.813 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.813 "name": "Existed_Raid", 00:14:23.813 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:23.813 "strip_size_kb": 64, 00:14:23.813 "state": "configuring", 00:14:23.813 "raid_level": "raid5f", 00:14:23.813 "superblock": false, 00:14:23.813 "num_base_bdevs": 4, 00:14:23.813 "num_base_bdevs_discovered": 2, 00:14:23.813 "num_base_bdevs_operational": 4, 00:14:23.813 "base_bdevs_list": [ 00:14:23.813 { 00:14:23.813 "name": null, 00:14:23.813 "uuid": "bf0a6192-f9d0-46f0-8b3a-487305d6801a", 00:14:23.813 "is_configured": false, 00:14:23.813 "data_offset": 0, 00:14:23.813 "data_size": 65536 00:14:23.813 }, 00:14:23.813 { 00:14:23.813 "name": null, 00:14:23.813 "uuid": "9e46324e-e0b2-4adc-995e-8fd6a8f897e3", 00:14:23.813 "is_configured": false, 00:14:23.813 "data_offset": 0, 00:14:23.813 "data_size": 65536 00:14:23.813 }, 00:14:23.813 { 00:14:23.813 "name": "BaseBdev3", 00:14:23.813 "uuid": "0c54b59b-ce68-445a-9172-09d7f9975601", 00:14:23.813 "is_configured": true, 00:14:23.813 "data_offset": 0, 00:14:23.813 "data_size": 65536 00:14:23.813 }, 00:14:23.813 { 00:14:23.813 "name": "BaseBdev4", 00:14:23.813 "uuid": "946ed0ec-47a1-44eb-a721-94ea913e47ab", 00:14:23.814 "is_configured": true, 00:14:23.814 "data_offset": 0, 00:14:23.814 "data_size": 65536 00:14:23.814 } 00:14:23.814 ] 00:14:23.814 }' 00:14:23.814 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.814 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.073 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.073 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:24.073 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.073 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.073 12:57:41 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.333 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:24.333 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:24.333 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.333 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.333 [2024-11-26 12:57:41.763235] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:24.333 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.333 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:24.333 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.333 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.333 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.333 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.333 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:24.333 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.333 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.333 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.333 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.333 12:57:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.334 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.334 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.334 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.334 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.334 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.334 "name": "Existed_Raid", 00:14:24.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.334 "strip_size_kb": 64, 00:14:24.334 "state": "configuring", 00:14:24.334 "raid_level": "raid5f", 00:14:24.334 "superblock": false, 00:14:24.334 "num_base_bdevs": 4, 00:14:24.334 "num_base_bdevs_discovered": 3, 00:14:24.334 "num_base_bdevs_operational": 4, 00:14:24.334 "base_bdevs_list": [ 00:14:24.334 { 00:14:24.334 "name": null, 00:14:24.334 "uuid": "bf0a6192-f9d0-46f0-8b3a-487305d6801a", 00:14:24.334 "is_configured": false, 00:14:24.334 "data_offset": 0, 00:14:24.334 "data_size": 65536 00:14:24.334 }, 00:14:24.334 { 00:14:24.334 "name": "BaseBdev2", 00:14:24.334 "uuid": "9e46324e-e0b2-4adc-995e-8fd6a8f897e3", 00:14:24.334 "is_configured": true, 00:14:24.334 "data_offset": 0, 00:14:24.334 "data_size": 65536 00:14:24.334 }, 00:14:24.334 { 00:14:24.334 "name": "BaseBdev3", 00:14:24.334 "uuid": "0c54b59b-ce68-445a-9172-09d7f9975601", 00:14:24.334 "is_configured": true, 00:14:24.334 "data_offset": 0, 00:14:24.334 "data_size": 65536 00:14:24.334 }, 00:14:24.334 { 00:14:24.334 "name": "BaseBdev4", 00:14:24.334 "uuid": "946ed0ec-47a1-44eb-a721-94ea913e47ab", 00:14:24.334 "is_configured": true, 00:14:24.334 "data_offset": 0, 00:14:24.334 "data_size": 65536 00:14:24.334 } 00:14:24.334 ] 00:14:24.334 }' 00:14:24.334 12:57:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.334 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.594 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:24.594 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.594 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.594 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.594 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.594 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:24.594 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.594 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.594 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.594 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:24.594 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bf0a6192-f9d0-46f0-8b3a-487305d6801a 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.855 [2024-11-26 12:57:42.284258] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:24.855 [2024-11-26 
12:57:42.284303] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:24.855 [2024-11-26 12:57:42.284311] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:24.855 [2024-11-26 12:57:42.284547] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:24.855 [2024-11-26 12:57:42.284972] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:24.855 [2024-11-26 12:57:42.284985] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:14:24.855 [2024-11-26 12:57:42.285150] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.855 NewBaseBdev 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.855 [ 00:14:24.855 { 00:14:24.855 "name": "NewBaseBdev", 00:14:24.855 "aliases": [ 00:14:24.855 "bf0a6192-f9d0-46f0-8b3a-487305d6801a" 00:14:24.855 ], 00:14:24.855 "product_name": "Malloc disk", 00:14:24.855 "block_size": 512, 00:14:24.855 "num_blocks": 65536, 00:14:24.855 "uuid": "bf0a6192-f9d0-46f0-8b3a-487305d6801a", 00:14:24.855 "assigned_rate_limits": { 00:14:24.855 "rw_ios_per_sec": 0, 00:14:24.855 "rw_mbytes_per_sec": 0, 00:14:24.855 "r_mbytes_per_sec": 0, 00:14:24.855 "w_mbytes_per_sec": 0 00:14:24.855 }, 00:14:24.855 "claimed": true, 00:14:24.855 "claim_type": "exclusive_write", 00:14:24.855 "zoned": false, 00:14:24.855 "supported_io_types": { 00:14:24.855 "read": true, 00:14:24.855 "write": true, 00:14:24.855 "unmap": true, 00:14:24.855 "flush": true, 00:14:24.855 "reset": true, 00:14:24.855 "nvme_admin": false, 00:14:24.855 "nvme_io": false, 00:14:24.855 "nvme_io_md": false, 00:14:24.855 "write_zeroes": true, 00:14:24.855 "zcopy": true, 00:14:24.855 "get_zone_info": false, 00:14:24.855 "zone_management": false, 00:14:24.855 "zone_append": false, 00:14:24.855 "compare": false, 00:14:24.855 "compare_and_write": false, 00:14:24.855 "abort": true, 00:14:24.855 "seek_hole": false, 00:14:24.855 "seek_data": false, 00:14:24.855 "copy": true, 00:14:24.855 "nvme_iov_md": false 00:14:24.855 }, 00:14:24.855 "memory_domains": [ 00:14:24.855 { 00:14:24.855 "dma_device_id": "system", 00:14:24.855 "dma_device_type": 1 00:14:24.855 }, 00:14:24.855 { 00:14:24.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.855 "dma_device_type": 2 00:14:24.855 } 
00:14:24.855 ], 00:14:24.855 "driver_specific": {} 00:14:24.855 } 00:14:24.855 ] 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.855 12:57:42 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.856 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.856 "name": "Existed_Raid", 00:14:24.856 "uuid": "dcc2abe0-06b0-4c64-80d6-48bde4e46bb8", 00:14:24.856 "strip_size_kb": 64, 00:14:24.856 "state": "online", 00:14:24.856 "raid_level": "raid5f", 00:14:24.856 "superblock": false, 00:14:24.856 "num_base_bdevs": 4, 00:14:24.856 "num_base_bdevs_discovered": 4, 00:14:24.856 "num_base_bdevs_operational": 4, 00:14:24.856 "base_bdevs_list": [ 00:14:24.856 { 00:14:24.856 "name": "NewBaseBdev", 00:14:24.856 "uuid": "bf0a6192-f9d0-46f0-8b3a-487305d6801a", 00:14:24.856 "is_configured": true, 00:14:24.856 "data_offset": 0, 00:14:24.856 "data_size": 65536 00:14:24.856 }, 00:14:24.856 { 00:14:24.856 "name": "BaseBdev2", 00:14:24.856 "uuid": "9e46324e-e0b2-4adc-995e-8fd6a8f897e3", 00:14:24.856 "is_configured": true, 00:14:24.856 "data_offset": 0, 00:14:24.856 "data_size": 65536 00:14:24.856 }, 00:14:24.856 { 00:14:24.856 "name": "BaseBdev3", 00:14:24.856 "uuid": "0c54b59b-ce68-445a-9172-09d7f9975601", 00:14:24.856 "is_configured": true, 00:14:24.856 "data_offset": 0, 00:14:24.856 "data_size": 65536 00:14:24.856 }, 00:14:24.856 { 00:14:24.856 "name": "BaseBdev4", 00:14:24.856 "uuid": "946ed0ec-47a1-44eb-a721-94ea913e47ab", 00:14:24.856 "is_configured": true, 00:14:24.856 "data_offset": 0, 00:14:24.856 "data_size": 65536 00:14:24.856 } 00:14:24.856 ] 00:14:24.856 }' 00:14:24.856 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.856 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.115 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:25.115 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:25.115 12:57:42 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:25.115 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:25.115 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:25.115 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:25.115 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:25.115 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:25.115 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.115 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.115 [2024-11-26 12:57:42.787697] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:25.375 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.375 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:25.375 "name": "Existed_Raid", 00:14:25.375 "aliases": [ 00:14:25.375 "dcc2abe0-06b0-4c64-80d6-48bde4e46bb8" 00:14:25.375 ], 00:14:25.375 "product_name": "Raid Volume", 00:14:25.375 "block_size": 512, 00:14:25.375 "num_blocks": 196608, 00:14:25.375 "uuid": "dcc2abe0-06b0-4c64-80d6-48bde4e46bb8", 00:14:25.375 "assigned_rate_limits": { 00:14:25.375 "rw_ios_per_sec": 0, 00:14:25.375 "rw_mbytes_per_sec": 0, 00:14:25.375 "r_mbytes_per_sec": 0, 00:14:25.375 "w_mbytes_per_sec": 0 00:14:25.375 }, 00:14:25.375 "claimed": false, 00:14:25.375 "zoned": false, 00:14:25.375 "supported_io_types": { 00:14:25.375 "read": true, 00:14:25.375 "write": true, 00:14:25.375 "unmap": false, 00:14:25.375 "flush": false, 00:14:25.375 "reset": true, 00:14:25.375 "nvme_admin": false, 00:14:25.375 "nvme_io": false, 00:14:25.375 "nvme_io_md": 
false, 00:14:25.376 "write_zeroes": true, 00:14:25.376 "zcopy": false, 00:14:25.376 "get_zone_info": false, 00:14:25.376 "zone_management": false, 00:14:25.376 "zone_append": false, 00:14:25.376 "compare": false, 00:14:25.376 "compare_and_write": false, 00:14:25.376 "abort": false, 00:14:25.376 "seek_hole": false, 00:14:25.376 "seek_data": false, 00:14:25.376 "copy": false, 00:14:25.376 "nvme_iov_md": false 00:14:25.376 }, 00:14:25.376 "driver_specific": { 00:14:25.376 "raid": { 00:14:25.376 "uuid": "dcc2abe0-06b0-4c64-80d6-48bde4e46bb8", 00:14:25.376 "strip_size_kb": 64, 00:14:25.376 "state": "online", 00:14:25.376 "raid_level": "raid5f", 00:14:25.376 "superblock": false, 00:14:25.376 "num_base_bdevs": 4, 00:14:25.376 "num_base_bdevs_discovered": 4, 00:14:25.376 "num_base_bdevs_operational": 4, 00:14:25.376 "base_bdevs_list": [ 00:14:25.376 { 00:14:25.376 "name": "NewBaseBdev", 00:14:25.376 "uuid": "bf0a6192-f9d0-46f0-8b3a-487305d6801a", 00:14:25.376 "is_configured": true, 00:14:25.376 "data_offset": 0, 00:14:25.376 "data_size": 65536 00:14:25.376 }, 00:14:25.376 { 00:14:25.376 "name": "BaseBdev2", 00:14:25.376 "uuid": "9e46324e-e0b2-4adc-995e-8fd6a8f897e3", 00:14:25.376 "is_configured": true, 00:14:25.376 "data_offset": 0, 00:14:25.376 "data_size": 65536 00:14:25.376 }, 00:14:25.376 { 00:14:25.376 "name": "BaseBdev3", 00:14:25.376 "uuid": "0c54b59b-ce68-445a-9172-09d7f9975601", 00:14:25.376 "is_configured": true, 00:14:25.376 "data_offset": 0, 00:14:25.376 "data_size": 65536 00:14:25.376 }, 00:14:25.376 { 00:14:25.376 "name": "BaseBdev4", 00:14:25.376 "uuid": "946ed0ec-47a1-44eb-a721-94ea913e47ab", 00:14:25.376 "is_configured": true, 00:14:25.376 "data_offset": 0, 00:14:25.376 "data_size": 65536 00:14:25.376 } 00:14:25.376 ] 00:14:25.376 } 00:14:25.376 } 00:14:25.376 }' 00:14:25.376 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:25.376 12:57:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:25.376 BaseBdev2 00:14:25.376 BaseBdev3 00:14:25.376 BaseBdev4' 00:14:25.376 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:25.376 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:25.376 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:25.376 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:25.376 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:25.376 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.376 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.376 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.376 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:25.376 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:25.376 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:25.376 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:25.376 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.376 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.376 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:14:25.376 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.376 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:25.376 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:25.376 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:25.376 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:25.376 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:25.376 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.376 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.376 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.376 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:25.376 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:25.376 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:25.637 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:25.637 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.637 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.637 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:25.637 12:57:43 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.637 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:25.637 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:25.637 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:25.637 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.637 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.637 [2024-11-26 12:57:43.086980] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:25.637 [2024-11-26 12:57:43.087058] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:25.637 [2024-11-26 12:57:43.087137] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.637 [2024-11-26 12:57:43.087413] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:25.637 [2024-11-26 12:57:43.087431] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:14:25.637 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.637 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 93478 00:14:25.637 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 93478 ']' 00:14:25.637 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 93478 00:14:25.637 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:14:25.637 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:25.637 12:57:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93478 00:14:25.637 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:25.637 killing process with pid 93478 00:14:25.637 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:25.637 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93478' 00:14:25.637 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 93478 00:14:25.637 [2024-11-26 12:57:43.142029] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:25.637 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 93478 00:14:25.637 [2024-11-26 12:57:43.182242] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:25.898 00:14:25.898 real 0m9.944s 00:14:25.898 user 0m16.970s 00:14:25.898 sys 0m2.143s 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:25.898 ************************************ 00:14:25.898 END TEST raid5f_state_function_test 00:14:25.898 ************************************ 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.898 12:57:43 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:14:25.898 12:57:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:25.898 12:57:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:25.898 12:57:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:25.898 ************************************ 00:14:25.898 START TEST 
raid5f_state_function_test_sb 00:14:25.898 ************************************ 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:25.898 
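The xtrace above shows `raid_state_function_test` building its base-bdev list by counting `i` from 1 to `num_base_bdevs` and echoing `BaseBdev$i` each pass. A minimal, POSIX-safe sketch of that naming loop (collecting into a plain string rather than the script's bash array, which is a simplification):

```shell
# Sketch of the base-bdev naming loop visible in the xtrace: count i from
# 1 to num_base_bdevs and emit BaseBdev$i, giving BaseBdev1..BaseBdev4
# for this 4-disk raid5f test.
num_base_bdevs=4
base_bdevs=""
i=1
while [ "$i" -le "$num_base_bdevs" ]; do
  base_bdevs="$base_bdevs BaseBdev$i"   # same names the log shows being echoed
  i=$((i + 1))
done
base_bdevs=${base_bdevs# }              # trim the leading space
echo "$base_bdevs"
```

Running it prints the four names the trace passes to `bdev_raid_create`: `BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4`.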
12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=94134 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:25.898 Process raid pid: 94134 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 94134' 00:14:25.898 12:57:43 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 94134 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 94134 ']' 00:14:25.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.898 12:57:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:25.899 12:57:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.159 [2024-11-26 12:57:43.608471] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
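The `waitforlisten 94134` call above blocks until the freshly started `bdev_svc` app listens on `/var/tmp/spdk.sock`, giving up after `max_retries` attempts. A minimal stand-in for that retry shape — a temp file fakes the RPC socket so the sketch runs without SPDK (the polling loop is an assumption about the helper's shape, not its exact code):

```shell
# Stand-in for a waitforlisten-style poll: loop until the "socket" exists
# or max_retries is exhausted. A temp file substitutes for the real
# /var/tmp/spdk.sock UNIX domain socket.
sock="/tmp/fake_spdk_sock.$$"
touch "$sock"                    # pretend bdev_svc already created its socket
max_retries=100
retries=0
listen_ok=0
while [ "$retries" -lt "$max_retries" ]; do
  if [ -e "$sock" ]; then
    listen_ok=1                  # app is "listening"; stop polling
    break
  fi
  sleep 0.1
  retries=$((retries + 1))
done
rm -f "$sock"
echo "listen_ok=$listen_ok"
```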
00:14:26.159 [2024-11-26 12:57:43.608616] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.159 [2024-11-26 12:57:43.774613] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.159 [2024-11-26 12:57:43.822606] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.419 [2024-11-26 12:57:43.866099] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.419 [2024-11-26 12:57:43.866136] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.990 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:26.990 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:26.990 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:26.990 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.990 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.990 [2024-11-26 12:57:44.419789] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:26.990 [2024-11-26 12:57:44.419842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:26.990 [2024-11-26 12:57:44.419853] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:26.990 [2024-11-26 12:57:44.419863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:26.990 [2024-11-26 12:57:44.419869] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:14:26.990 [2024-11-26 12:57:44.419879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:26.990 [2024-11-26 12:57:44.419885] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:26.990 [2024-11-26 12:57:44.419893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:26.990 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.990 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:26.990 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.990 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:26.990 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:26.990 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.990 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:26.990 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.990 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.990 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.990 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.990 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.990 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:14:26.990 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.990 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.990 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.990 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.990 "name": "Existed_Raid", 00:14:26.990 "uuid": "084d26b0-210c-432b-9acf-53362641b683", 00:14:26.990 "strip_size_kb": 64, 00:14:26.990 "state": "configuring", 00:14:26.990 "raid_level": "raid5f", 00:14:26.990 "superblock": true, 00:14:26.990 "num_base_bdevs": 4, 00:14:26.990 "num_base_bdevs_discovered": 0, 00:14:26.990 "num_base_bdevs_operational": 4, 00:14:26.990 "base_bdevs_list": [ 00:14:26.990 { 00:14:26.990 "name": "BaseBdev1", 00:14:26.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.990 "is_configured": false, 00:14:26.990 "data_offset": 0, 00:14:26.990 "data_size": 0 00:14:26.990 }, 00:14:26.990 { 00:14:26.990 "name": "BaseBdev2", 00:14:26.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.990 "is_configured": false, 00:14:26.990 "data_offset": 0, 00:14:26.990 "data_size": 0 00:14:26.990 }, 00:14:26.990 { 00:14:26.990 "name": "BaseBdev3", 00:14:26.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.990 "is_configured": false, 00:14:26.990 "data_offset": 0, 00:14:26.990 "data_size": 0 00:14:26.990 }, 00:14:26.990 { 00:14:26.990 "name": "BaseBdev4", 00:14:26.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.990 "is_configured": false, 00:14:26.990 "data_offset": 0, 00:14:26.990 "data_size": 0 00:14:26.990 } 00:14:26.990 ] 00:14:26.990 }' 00:14:26.990 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.990 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
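The `verify_raid_bdev_state` helper above captures `rpc_cmd bdev_raid_get_bdevs all` filtered through `jq -r '.[] | select(.name == "Existed_Raid")'` and then inspects fields such as `state` and `num_base_bdevs_discovered`. A minimal stand-in on a trimmed copy of that JSON, using `sed` instead of `jq` so it runs without SPDK or jq installed (the field checks are an assumption about what the helper verifies, not its exact code):

```shell
# Trimmed copy of the Existed_Raid info printed in the log above.
raid_bdev_info='{ "name": "Existed_Raid", "state": "configuring", "num_base_bdevs_discovered": 0, "num_base_bdevs_operational": 4 }'

# Pull out the fields the state check cares about.
state=$(printf '%s' "$raid_bdev_info" | sed -n 's/.*"state": *"\([^"]*\)".*/\1/p')
discovered=$(printf '%s' "$raid_bdev_info" | sed -n 's/.*"num_base_bdevs_discovered": *\([0-9][0-9]*\).*/\1/p')

# With no base bdevs registered yet, the raid stays "configuring".
echo "state=$state discovered=$discovered"
```

This matches the trace: the raid bdev remains `configuring` with 0 of 4 base bdevs discovered until the Malloc base bdevs are created and claimed.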
00:14:27.250 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:27.250 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.250 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.250 [2024-11-26 12:57:44.894807] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:27.250 [2024-11-26 12:57:44.894913] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:14:27.250 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.250 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:27.250 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.250 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.250 [2024-11-26 12:57:44.906839] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:27.250 [2024-11-26 12:57:44.906919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:27.250 [2024-11-26 12:57:44.906944] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:27.250 [2024-11-26 12:57:44.906965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:27.250 [2024-11-26 12:57:44.906981] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:27.250 [2024-11-26 12:57:44.906999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:27.250 [2024-11-26 12:57:44.907015] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:27.250 [2024-11-26 12:57:44.907049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:27.250 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.251 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:27.251 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.251 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.251 [2024-11-26 12:57:44.927811] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:27.511 BaseBdev1 00:14:27.511 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.511 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:27.511 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:27.511 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:27.511 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:27.511 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:27.511 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:27.511 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:27.511 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.511 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:27.511 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.511 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:27.511 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.511 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.511 [ 00:14:27.511 { 00:14:27.511 "name": "BaseBdev1", 00:14:27.511 "aliases": [ 00:14:27.511 "22a37eec-ed50-4a10-b1e9-f77e5564e027" 00:14:27.511 ], 00:14:27.511 "product_name": "Malloc disk", 00:14:27.511 "block_size": 512, 00:14:27.511 "num_blocks": 65536, 00:14:27.511 "uuid": "22a37eec-ed50-4a10-b1e9-f77e5564e027", 00:14:27.511 "assigned_rate_limits": { 00:14:27.511 "rw_ios_per_sec": 0, 00:14:27.511 "rw_mbytes_per_sec": 0, 00:14:27.511 "r_mbytes_per_sec": 0, 00:14:27.511 "w_mbytes_per_sec": 0 00:14:27.511 }, 00:14:27.511 "claimed": true, 00:14:27.511 "claim_type": "exclusive_write", 00:14:27.511 "zoned": false, 00:14:27.511 "supported_io_types": { 00:14:27.511 "read": true, 00:14:27.511 "write": true, 00:14:27.511 "unmap": true, 00:14:27.511 "flush": true, 00:14:27.511 "reset": true, 00:14:27.511 "nvme_admin": false, 00:14:27.511 "nvme_io": false, 00:14:27.511 "nvme_io_md": false, 00:14:27.511 "write_zeroes": true, 00:14:27.511 "zcopy": true, 00:14:27.511 "get_zone_info": false, 00:14:27.511 "zone_management": false, 00:14:27.511 "zone_append": false, 00:14:27.511 "compare": false, 00:14:27.511 "compare_and_write": false, 00:14:27.511 "abort": true, 00:14:27.511 "seek_hole": false, 00:14:27.511 "seek_data": false, 00:14:27.511 "copy": true, 00:14:27.511 "nvme_iov_md": false 00:14:27.511 }, 00:14:27.511 "memory_domains": [ 00:14:27.511 { 00:14:27.511 "dma_device_id": "system", 00:14:27.511 "dma_device_type": 1 00:14:27.511 }, 00:14:27.511 { 00:14:27.511 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:27.511 "dma_device_type": 2 00:14:27.511 } 00:14:27.511 ], 00:14:27.511 "driver_specific": {} 00:14:27.511 } 00:14:27.511 ] 00:14:27.511 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.511 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:27.511 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:27.511 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.511 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.511 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:27.511 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.511 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:27.511 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.511 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.511 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.511 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.511 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.511 12:57:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.511 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.511 12:57:44 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.511 12:57:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.511 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.511 "name": "Existed_Raid", 00:14:27.511 "uuid": "b69a180e-7f62-43f5-86bd-d15531831bf2", 00:14:27.511 "strip_size_kb": 64, 00:14:27.511 "state": "configuring", 00:14:27.511 "raid_level": "raid5f", 00:14:27.511 "superblock": true, 00:14:27.511 "num_base_bdevs": 4, 00:14:27.511 "num_base_bdevs_discovered": 1, 00:14:27.511 "num_base_bdevs_operational": 4, 00:14:27.511 "base_bdevs_list": [ 00:14:27.511 { 00:14:27.511 "name": "BaseBdev1", 00:14:27.511 "uuid": "22a37eec-ed50-4a10-b1e9-f77e5564e027", 00:14:27.511 "is_configured": true, 00:14:27.511 "data_offset": 2048, 00:14:27.511 "data_size": 63488 00:14:27.511 }, 00:14:27.511 { 00:14:27.511 "name": "BaseBdev2", 00:14:27.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.511 "is_configured": false, 00:14:27.511 "data_offset": 0, 00:14:27.511 "data_size": 0 00:14:27.511 }, 00:14:27.511 { 00:14:27.511 "name": "BaseBdev3", 00:14:27.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.511 "is_configured": false, 00:14:27.511 "data_offset": 0, 00:14:27.511 "data_size": 0 00:14:27.511 }, 00:14:27.511 { 00:14:27.511 "name": "BaseBdev4", 00:14:27.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.511 "is_configured": false, 00:14:27.511 "data_offset": 0, 00:14:27.511 "data_size": 0 00:14:27.511 } 00:14:27.511 ] 00:14:27.511 }' 00:14:27.511 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.511 12:57:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.771 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:27.771 12:57:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.771 12:57:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.771 [2024-11-26 12:57:45.391101] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:27.771 [2024-11-26 12:57:45.391210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:14:27.771 12:57:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.771 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:27.771 12:57:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.771 12:57:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.771 [2024-11-26 12:57:45.403130] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:27.771 [2024-11-26 12:57:45.404898] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:27.771 [2024-11-26 12:57:45.404970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:27.771 [2024-11-26 12:57:45.404995] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:27.771 [2024-11-26 12:57:45.405016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:27.771 [2024-11-26 12:57:45.405032] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:27.771 [2024-11-26 12:57:45.405050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:27.771 12:57:45 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.771 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:27.771 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:27.771 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:27.771 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.771 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.771 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:27.771 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.771 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:27.771 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.771 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.771 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.771 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.771 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.771 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.771 12:57:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.771 12:57:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.771 12:57:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.031 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.031 "name": "Existed_Raid", 00:14:28.031 "uuid": "e068cb74-94ed-4132-a9f9-3fe0e1b92be5", 00:14:28.031 "strip_size_kb": 64, 00:14:28.031 "state": "configuring", 00:14:28.031 "raid_level": "raid5f", 00:14:28.031 "superblock": true, 00:14:28.031 "num_base_bdevs": 4, 00:14:28.031 "num_base_bdevs_discovered": 1, 00:14:28.031 "num_base_bdevs_operational": 4, 00:14:28.031 "base_bdevs_list": [ 00:14:28.031 { 00:14:28.031 "name": "BaseBdev1", 00:14:28.031 "uuid": "22a37eec-ed50-4a10-b1e9-f77e5564e027", 00:14:28.031 "is_configured": true, 00:14:28.031 "data_offset": 2048, 00:14:28.031 "data_size": 63488 00:14:28.031 }, 00:14:28.031 { 00:14:28.031 "name": "BaseBdev2", 00:14:28.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.031 "is_configured": false, 00:14:28.031 "data_offset": 0, 00:14:28.031 "data_size": 0 00:14:28.031 }, 00:14:28.031 { 00:14:28.031 "name": "BaseBdev3", 00:14:28.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.031 "is_configured": false, 00:14:28.031 "data_offset": 0, 00:14:28.031 "data_size": 0 00:14:28.031 }, 00:14:28.031 { 00:14:28.031 "name": "BaseBdev4", 00:14:28.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.031 "is_configured": false, 00:14:28.031 "data_offset": 0, 00:14:28.031 "data_size": 0 00:14:28.031 } 00:14:28.031 ] 00:14:28.031 }' 00:14:28.031 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.031 12:57:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.291 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:28.291 12:57:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:28.291 12:57:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.291 [2024-11-26 12:57:45.894819] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:28.291 BaseBdev2 00:14:28.291 12:57:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.291 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:28.291 12:57:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:28.291 12:57:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:28.291 12:57:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:28.291 12:57:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:28.291 12:57:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:28.291 12:57:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:28.291 12:57:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.291 12:57:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.291 12:57:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.291 12:57:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:28.291 12:57:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.291 12:57:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.291 [ 00:14:28.291 { 00:14:28.291 "name": "BaseBdev2", 00:14:28.291 "aliases": [ 00:14:28.291 
"0bf61b97-ae71-454a-a8dd-a3e3add53ca9" 00:14:28.291 ], 00:14:28.291 "product_name": "Malloc disk", 00:14:28.291 "block_size": 512, 00:14:28.291 "num_blocks": 65536, 00:14:28.291 "uuid": "0bf61b97-ae71-454a-a8dd-a3e3add53ca9", 00:14:28.291 "assigned_rate_limits": { 00:14:28.291 "rw_ios_per_sec": 0, 00:14:28.291 "rw_mbytes_per_sec": 0, 00:14:28.291 "r_mbytes_per_sec": 0, 00:14:28.291 "w_mbytes_per_sec": 0 00:14:28.291 }, 00:14:28.291 "claimed": true, 00:14:28.291 "claim_type": "exclusive_write", 00:14:28.291 "zoned": false, 00:14:28.291 "supported_io_types": { 00:14:28.291 "read": true, 00:14:28.291 "write": true, 00:14:28.291 "unmap": true, 00:14:28.291 "flush": true, 00:14:28.291 "reset": true, 00:14:28.291 "nvme_admin": false, 00:14:28.291 "nvme_io": false, 00:14:28.291 "nvme_io_md": false, 00:14:28.291 "write_zeroes": true, 00:14:28.291 "zcopy": true, 00:14:28.291 "get_zone_info": false, 00:14:28.291 "zone_management": false, 00:14:28.291 "zone_append": false, 00:14:28.291 "compare": false, 00:14:28.292 "compare_and_write": false, 00:14:28.292 "abort": true, 00:14:28.292 "seek_hole": false, 00:14:28.292 "seek_data": false, 00:14:28.292 "copy": true, 00:14:28.292 "nvme_iov_md": false 00:14:28.292 }, 00:14:28.292 "memory_domains": [ 00:14:28.292 { 00:14:28.292 "dma_device_id": "system", 00:14:28.292 "dma_device_type": 1 00:14:28.292 }, 00:14:28.292 { 00:14:28.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.292 "dma_device_type": 2 00:14:28.292 } 00:14:28.292 ], 00:14:28.292 "driver_specific": {} 00:14:28.292 } 00:14:28.292 ] 00:14:28.292 12:57:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.292 12:57:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:28.292 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:28.292 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:14:28.292 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:28.292 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:28.292 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.292 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:28.292 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.292 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:28.292 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.292 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.292 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.292 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.292 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.292 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.292 12:57:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.292 12:57:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.292 12:57:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.559 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.559 "name": "Existed_Raid", 00:14:28.559 "uuid": 
"e068cb74-94ed-4132-a9f9-3fe0e1b92be5", 00:14:28.559 "strip_size_kb": 64, 00:14:28.559 "state": "configuring", 00:14:28.559 "raid_level": "raid5f", 00:14:28.559 "superblock": true, 00:14:28.559 "num_base_bdevs": 4, 00:14:28.559 "num_base_bdevs_discovered": 2, 00:14:28.559 "num_base_bdevs_operational": 4, 00:14:28.559 "base_bdevs_list": [ 00:14:28.559 { 00:14:28.559 "name": "BaseBdev1", 00:14:28.559 "uuid": "22a37eec-ed50-4a10-b1e9-f77e5564e027", 00:14:28.559 "is_configured": true, 00:14:28.559 "data_offset": 2048, 00:14:28.559 "data_size": 63488 00:14:28.559 }, 00:14:28.559 { 00:14:28.559 "name": "BaseBdev2", 00:14:28.559 "uuid": "0bf61b97-ae71-454a-a8dd-a3e3add53ca9", 00:14:28.559 "is_configured": true, 00:14:28.559 "data_offset": 2048, 00:14:28.559 "data_size": 63488 00:14:28.559 }, 00:14:28.559 { 00:14:28.559 "name": "BaseBdev3", 00:14:28.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.559 "is_configured": false, 00:14:28.559 "data_offset": 0, 00:14:28.559 "data_size": 0 00:14:28.559 }, 00:14:28.559 { 00:14:28.559 "name": "BaseBdev4", 00:14:28.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.559 "is_configured": false, 00:14:28.559 "data_offset": 0, 00:14:28.559 "data_size": 0 00:14:28.559 } 00:14:28.559 ] 00:14:28.559 }' 00:14:28.559 12:57:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.559 12:57:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.836 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:28.836 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.836 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.837 [2024-11-26 12:57:46.428844] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:28.837 BaseBdev3 
00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.837 [ 00:14:28.837 { 00:14:28.837 "name": "BaseBdev3", 00:14:28.837 "aliases": [ 00:14:28.837 "0f1ad326-5cd3-4718-82bf-0fd8cac2ebc4" 00:14:28.837 ], 00:14:28.837 "product_name": "Malloc disk", 00:14:28.837 "block_size": 512, 00:14:28.837 "num_blocks": 65536, 00:14:28.837 "uuid": "0f1ad326-5cd3-4718-82bf-0fd8cac2ebc4", 00:14:28.837 
"assigned_rate_limits": { 00:14:28.837 "rw_ios_per_sec": 0, 00:14:28.837 "rw_mbytes_per_sec": 0, 00:14:28.837 "r_mbytes_per_sec": 0, 00:14:28.837 "w_mbytes_per_sec": 0 00:14:28.837 }, 00:14:28.837 "claimed": true, 00:14:28.837 "claim_type": "exclusive_write", 00:14:28.837 "zoned": false, 00:14:28.837 "supported_io_types": { 00:14:28.837 "read": true, 00:14:28.837 "write": true, 00:14:28.837 "unmap": true, 00:14:28.837 "flush": true, 00:14:28.837 "reset": true, 00:14:28.837 "nvme_admin": false, 00:14:28.837 "nvme_io": false, 00:14:28.837 "nvme_io_md": false, 00:14:28.837 "write_zeroes": true, 00:14:28.837 "zcopy": true, 00:14:28.837 "get_zone_info": false, 00:14:28.837 "zone_management": false, 00:14:28.837 "zone_append": false, 00:14:28.837 "compare": false, 00:14:28.837 "compare_and_write": false, 00:14:28.837 "abort": true, 00:14:28.837 "seek_hole": false, 00:14:28.837 "seek_data": false, 00:14:28.837 "copy": true, 00:14:28.837 "nvme_iov_md": false 00:14:28.837 }, 00:14:28.837 "memory_domains": [ 00:14:28.837 { 00:14:28.837 "dma_device_id": "system", 00:14:28.837 "dma_device_type": 1 00:14:28.837 }, 00:14:28.837 { 00:14:28.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.837 "dma_device_type": 2 00:14:28.837 } 00:14:28.837 ], 00:14:28.837 "driver_specific": {} 00:14:28.837 } 00:14:28.837 ] 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.837 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.111 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.111 "name": "Existed_Raid", 00:14:29.111 "uuid": "e068cb74-94ed-4132-a9f9-3fe0e1b92be5", 00:14:29.111 "strip_size_kb": 64, 00:14:29.111 "state": "configuring", 00:14:29.111 "raid_level": "raid5f", 00:14:29.111 "superblock": true, 00:14:29.111 "num_base_bdevs": 4, 00:14:29.111 "num_base_bdevs_discovered": 3, 
00:14:29.111 "num_base_bdevs_operational": 4, 00:14:29.111 "base_bdevs_list": [ 00:14:29.111 { 00:14:29.111 "name": "BaseBdev1", 00:14:29.111 "uuid": "22a37eec-ed50-4a10-b1e9-f77e5564e027", 00:14:29.111 "is_configured": true, 00:14:29.111 "data_offset": 2048, 00:14:29.111 "data_size": 63488 00:14:29.111 }, 00:14:29.111 { 00:14:29.111 "name": "BaseBdev2", 00:14:29.111 "uuid": "0bf61b97-ae71-454a-a8dd-a3e3add53ca9", 00:14:29.111 "is_configured": true, 00:14:29.111 "data_offset": 2048, 00:14:29.111 "data_size": 63488 00:14:29.111 }, 00:14:29.111 { 00:14:29.111 "name": "BaseBdev3", 00:14:29.111 "uuid": "0f1ad326-5cd3-4718-82bf-0fd8cac2ebc4", 00:14:29.111 "is_configured": true, 00:14:29.111 "data_offset": 2048, 00:14:29.111 "data_size": 63488 00:14:29.111 }, 00:14:29.111 { 00:14:29.111 "name": "BaseBdev4", 00:14:29.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.111 "is_configured": false, 00:14:29.111 "data_offset": 0, 00:14:29.111 "data_size": 0 00:14:29.111 } 00:14:29.111 ] 00:14:29.111 }' 00:14:29.111 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.111 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.372 [2024-11-26 12:57:46.938898] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:29.372 [2024-11-26 12:57:46.939088] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:29.372 [2024-11-26 12:57:46.939103] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:29.372 [2024-11-26 
12:57:46.939371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:29.372 BaseBdev4 00:14:29.372 [2024-11-26 12:57:46.939843] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:29.372 [2024-11-26 12:57:46.939866] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:14:29.372 [2024-11-26 12:57:46.939975] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:29.372 12:57:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.372 [ 00:14:29.372 { 00:14:29.372 "name": "BaseBdev4", 00:14:29.372 "aliases": [ 00:14:29.372 "021bc9c1-7cb0-49fd-a813-a0a3e43dd898" 00:14:29.372 ], 00:14:29.372 "product_name": "Malloc disk", 00:14:29.372 "block_size": 512, 00:14:29.372 "num_blocks": 65536, 00:14:29.372 "uuid": "021bc9c1-7cb0-49fd-a813-a0a3e43dd898", 00:14:29.372 "assigned_rate_limits": { 00:14:29.372 "rw_ios_per_sec": 0, 00:14:29.372 "rw_mbytes_per_sec": 0, 00:14:29.372 "r_mbytes_per_sec": 0, 00:14:29.372 "w_mbytes_per_sec": 0 00:14:29.372 }, 00:14:29.372 "claimed": true, 00:14:29.372 "claim_type": "exclusive_write", 00:14:29.372 "zoned": false, 00:14:29.372 "supported_io_types": { 00:14:29.372 "read": true, 00:14:29.372 "write": true, 00:14:29.372 "unmap": true, 00:14:29.372 "flush": true, 00:14:29.372 "reset": true, 00:14:29.372 "nvme_admin": false, 00:14:29.372 "nvme_io": false, 00:14:29.372 "nvme_io_md": false, 00:14:29.372 "write_zeroes": true, 00:14:29.372 "zcopy": true, 00:14:29.372 "get_zone_info": false, 00:14:29.372 "zone_management": false, 00:14:29.372 "zone_append": false, 00:14:29.372 "compare": false, 00:14:29.372 "compare_and_write": false, 00:14:29.372 "abort": true, 00:14:29.372 "seek_hole": false, 00:14:29.372 "seek_data": false, 00:14:29.372 "copy": true, 00:14:29.372 "nvme_iov_md": false 00:14:29.372 }, 00:14:29.372 "memory_domains": [ 00:14:29.372 { 00:14:29.372 "dma_device_id": "system", 00:14:29.372 "dma_device_type": 1 00:14:29.372 }, 00:14:29.372 { 00:14:29.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.372 "dma_device_type": 2 00:14:29.372 } 00:14:29.372 ], 00:14:29.372 "driver_specific": {} 00:14:29.372 } 00:14:29.372 ] 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.372 12:57:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.372 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:29.372 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.372 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.372 "name": "Existed_Raid", 00:14:29.372 "uuid": "e068cb74-94ed-4132-a9f9-3fe0e1b92be5", 00:14:29.372 "strip_size_kb": 64, 00:14:29.372 "state": "online", 00:14:29.372 "raid_level": "raid5f", 00:14:29.372 "superblock": true, 00:14:29.372 "num_base_bdevs": 4, 00:14:29.372 "num_base_bdevs_discovered": 4, 00:14:29.372 "num_base_bdevs_operational": 4, 00:14:29.372 "base_bdevs_list": [ 00:14:29.372 { 00:14:29.372 "name": "BaseBdev1", 00:14:29.372 "uuid": "22a37eec-ed50-4a10-b1e9-f77e5564e027", 00:14:29.372 "is_configured": true, 00:14:29.372 "data_offset": 2048, 00:14:29.372 "data_size": 63488 00:14:29.372 }, 00:14:29.372 { 00:14:29.372 "name": "BaseBdev2", 00:14:29.372 "uuid": "0bf61b97-ae71-454a-a8dd-a3e3add53ca9", 00:14:29.372 "is_configured": true, 00:14:29.372 "data_offset": 2048, 00:14:29.372 "data_size": 63488 00:14:29.372 }, 00:14:29.372 { 00:14:29.372 "name": "BaseBdev3", 00:14:29.372 "uuid": "0f1ad326-5cd3-4718-82bf-0fd8cac2ebc4", 00:14:29.372 "is_configured": true, 00:14:29.372 "data_offset": 2048, 00:14:29.372 "data_size": 63488 00:14:29.372 }, 00:14:29.372 { 00:14:29.372 "name": "BaseBdev4", 00:14:29.372 "uuid": "021bc9c1-7cb0-49fd-a813-a0a3e43dd898", 00:14:29.372 "is_configured": true, 00:14:29.372 "data_offset": 2048, 00:14:29.372 "data_size": 63488 00:14:29.372 } 00:14:29.372 ] 00:14:29.372 }' 00:14:29.372 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.372 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.943 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:29.943 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:14:29.943 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:29.943 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:29.943 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:29.943 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:29.943 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:29.943 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.943 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.943 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:29.943 [2024-11-26 12:57:47.454310] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.943 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.943 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:29.943 "name": "Existed_Raid", 00:14:29.943 "aliases": [ 00:14:29.943 "e068cb74-94ed-4132-a9f9-3fe0e1b92be5" 00:14:29.943 ], 00:14:29.943 "product_name": "Raid Volume", 00:14:29.943 "block_size": 512, 00:14:29.943 "num_blocks": 190464, 00:14:29.943 "uuid": "e068cb74-94ed-4132-a9f9-3fe0e1b92be5", 00:14:29.943 "assigned_rate_limits": { 00:14:29.943 "rw_ios_per_sec": 0, 00:14:29.943 "rw_mbytes_per_sec": 0, 00:14:29.943 "r_mbytes_per_sec": 0, 00:14:29.943 "w_mbytes_per_sec": 0 00:14:29.943 }, 00:14:29.943 "claimed": false, 00:14:29.943 "zoned": false, 00:14:29.943 "supported_io_types": { 00:14:29.943 "read": true, 00:14:29.943 "write": true, 00:14:29.943 "unmap": false, 00:14:29.943 "flush": false, 
00:14:29.943 "reset": true, 00:14:29.943 "nvme_admin": false, 00:14:29.943 "nvme_io": false, 00:14:29.943 "nvme_io_md": false, 00:14:29.943 "write_zeroes": true, 00:14:29.943 "zcopy": false, 00:14:29.943 "get_zone_info": false, 00:14:29.943 "zone_management": false, 00:14:29.943 "zone_append": false, 00:14:29.943 "compare": false, 00:14:29.943 "compare_and_write": false, 00:14:29.943 "abort": false, 00:14:29.943 "seek_hole": false, 00:14:29.943 "seek_data": false, 00:14:29.943 "copy": false, 00:14:29.943 "nvme_iov_md": false 00:14:29.943 }, 00:14:29.943 "driver_specific": { 00:14:29.943 "raid": { 00:14:29.943 "uuid": "e068cb74-94ed-4132-a9f9-3fe0e1b92be5", 00:14:29.943 "strip_size_kb": 64, 00:14:29.943 "state": "online", 00:14:29.943 "raid_level": "raid5f", 00:14:29.943 "superblock": true, 00:14:29.943 "num_base_bdevs": 4, 00:14:29.943 "num_base_bdevs_discovered": 4, 00:14:29.943 "num_base_bdevs_operational": 4, 00:14:29.943 "base_bdevs_list": [ 00:14:29.943 { 00:14:29.943 "name": "BaseBdev1", 00:14:29.943 "uuid": "22a37eec-ed50-4a10-b1e9-f77e5564e027", 00:14:29.943 "is_configured": true, 00:14:29.943 "data_offset": 2048, 00:14:29.943 "data_size": 63488 00:14:29.943 }, 00:14:29.943 { 00:14:29.943 "name": "BaseBdev2", 00:14:29.943 "uuid": "0bf61b97-ae71-454a-a8dd-a3e3add53ca9", 00:14:29.943 "is_configured": true, 00:14:29.943 "data_offset": 2048, 00:14:29.943 "data_size": 63488 00:14:29.943 }, 00:14:29.943 { 00:14:29.943 "name": "BaseBdev3", 00:14:29.943 "uuid": "0f1ad326-5cd3-4718-82bf-0fd8cac2ebc4", 00:14:29.943 "is_configured": true, 00:14:29.943 "data_offset": 2048, 00:14:29.943 "data_size": 63488 00:14:29.943 }, 00:14:29.943 { 00:14:29.943 "name": "BaseBdev4", 00:14:29.943 "uuid": "021bc9c1-7cb0-49fd-a813-a0a3e43dd898", 00:14:29.943 "is_configured": true, 00:14:29.943 "data_offset": 2048, 00:14:29.943 "data_size": 63488 00:14:29.943 } 00:14:29.943 ] 00:14:29.943 } 00:14:29.943 } 00:14:29.943 }' 00:14:29.943 12:57:47 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:29.943 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:29.943 BaseBdev2 00:14:29.943 BaseBdev3 00:14:29.943 BaseBdev4' 00:14:29.943 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.943 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:29.943 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.943 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.943 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:29.943 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.943 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.943 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.203 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:30.203 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.204 [2024-11-26 12:57:47.785572] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.204 "name": "Existed_Raid", 00:14:30.204 "uuid": "e068cb74-94ed-4132-a9f9-3fe0e1b92be5", 00:14:30.204 "strip_size_kb": 64, 00:14:30.204 "state": "online", 00:14:30.204 "raid_level": "raid5f", 00:14:30.204 "superblock": true, 00:14:30.204 "num_base_bdevs": 4, 00:14:30.204 "num_base_bdevs_discovered": 3, 00:14:30.204 "num_base_bdevs_operational": 3, 00:14:30.204 "base_bdevs_list": [ 00:14:30.204 { 00:14:30.204 "name": null, 00:14:30.204 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:30.204 "is_configured": false, 00:14:30.204 "data_offset": 0, 00:14:30.204 "data_size": 63488 00:14:30.204 }, 00:14:30.204 { 00:14:30.204 "name": "BaseBdev2", 00:14:30.204 "uuid": "0bf61b97-ae71-454a-a8dd-a3e3add53ca9", 00:14:30.204 "is_configured": true, 00:14:30.204 "data_offset": 2048, 00:14:30.204 "data_size": 63488 00:14:30.204 }, 00:14:30.204 { 00:14:30.204 "name": "BaseBdev3", 00:14:30.204 "uuid": "0f1ad326-5cd3-4718-82bf-0fd8cac2ebc4", 00:14:30.204 "is_configured": true, 00:14:30.204 "data_offset": 2048, 00:14:30.204 "data_size": 63488 00:14:30.204 }, 00:14:30.204 { 00:14:30.204 "name": "BaseBdev4", 00:14:30.204 "uuid": "021bc9c1-7cb0-49fd-a813-a0a3e43dd898", 00:14:30.204 "is_configured": true, 00:14:30.204 "data_offset": 2048, 00:14:30.204 "data_size": 63488 00:14:30.204 } 00:14:30.204 ] 00:14:30.204 }' 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.204 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.774 [2024-11-26 12:57:48.252085] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:30.774 [2024-11-26 12:57:48.252288] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:30.774 [2024-11-26 12:57:48.263147] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:30.774 
12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.774 [2024-11-26 12:57:48.319071] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.774 [2024-11-26 12:57:48.385702] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:30.774 [2024-11-26 12:57:48.385746] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:30.774 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:31.035 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:31.035 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.035 12:57:48 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:31.035 BaseBdev2 00:14:31.035 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.035 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:31.035 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:31.035 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:31.035 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:31.035 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:31.035 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.036 [ 00:14:31.036 { 00:14:31.036 "name": "BaseBdev2", 00:14:31.036 "aliases": [ 00:14:31.036 "c08f0071-619f-4f34-bdfe-f446dc9b59f3" 00:14:31.036 ], 00:14:31.036 "product_name": "Malloc disk", 00:14:31.036 "block_size": 512, 00:14:31.036 "num_blocks": 65536, 00:14:31.036 "uuid": 
"c08f0071-619f-4f34-bdfe-f446dc9b59f3", 00:14:31.036 "assigned_rate_limits": { 00:14:31.036 "rw_ios_per_sec": 0, 00:14:31.036 "rw_mbytes_per_sec": 0, 00:14:31.036 "r_mbytes_per_sec": 0, 00:14:31.036 "w_mbytes_per_sec": 0 00:14:31.036 }, 00:14:31.036 "claimed": false, 00:14:31.036 "zoned": false, 00:14:31.036 "supported_io_types": { 00:14:31.036 "read": true, 00:14:31.036 "write": true, 00:14:31.036 "unmap": true, 00:14:31.036 "flush": true, 00:14:31.036 "reset": true, 00:14:31.036 "nvme_admin": false, 00:14:31.036 "nvme_io": false, 00:14:31.036 "nvme_io_md": false, 00:14:31.036 "write_zeroes": true, 00:14:31.036 "zcopy": true, 00:14:31.036 "get_zone_info": false, 00:14:31.036 "zone_management": false, 00:14:31.036 "zone_append": false, 00:14:31.036 "compare": false, 00:14:31.036 "compare_and_write": false, 00:14:31.036 "abort": true, 00:14:31.036 "seek_hole": false, 00:14:31.036 "seek_data": false, 00:14:31.036 "copy": true, 00:14:31.036 "nvme_iov_md": false 00:14:31.036 }, 00:14:31.036 "memory_domains": [ 00:14:31.036 { 00:14:31.036 "dma_device_id": "system", 00:14:31.036 "dma_device_type": 1 00:14:31.036 }, 00:14:31.036 { 00:14:31.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.036 "dma_device_type": 2 00:14:31.036 } 00:14:31.036 ], 00:14:31.036 "driver_specific": {} 00:14:31.036 } 00:14:31.036 ] 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.036 BaseBdev3 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.036 [ 00:14:31.036 { 00:14:31.036 "name": "BaseBdev3", 00:14:31.036 "aliases": [ 00:14:31.036 "2909d78b-184d-4969-96f4-0ade26d0b8f2" 00:14:31.036 ], 00:14:31.036 
"product_name": "Malloc disk", 00:14:31.036 "block_size": 512, 00:14:31.036 "num_blocks": 65536, 00:14:31.036 "uuid": "2909d78b-184d-4969-96f4-0ade26d0b8f2", 00:14:31.036 "assigned_rate_limits": { 00:14:31.036 "rw_ios_per_sec": 0, 00:14:31.036 "rw_mbytes_per_sec": 0, 00:14:31.036 "r_mbytes_per_sec": 0, 00:14:31.036 "w_mbytes_per_sec": 0 00:14:31.036 }, 00:14:31.036 "claimed": false, 00:14:31.036 "zoned": false, 00:14:31.036 "supported_io_types": { 00:14:31.036 "read": true, 00:14:31.036 "write": true, 00:14:31.036 "unmap": true, 00:14:31.036 "flush": true, 00:14:31.036 "reset": true, 00:14:31.036 "nvme_admin": false, 00:14:31.036 "nvme_io": false, 00:14:31.036 "nvme_io_md": false, 00:14:31.036 "write_zeroes": true, 00:14:31.036 "zcopy": true, 00:14:31.036 "get_zone_info": false, 00:14:31.036 "zone_management": false, 00:14:31.036 "zone_append": false, 00:14:31.036 "compare": false, 00:14:31.036 "compare_and_write": false, 00:14:31.036 "abort": true, 00:14:31.036 "seek_hole": false, 00:14:31.036 "seek_data": false, 00:14:31.036 "copy": true, 00:14:31.036 "nvme_iov_md": false 00:14:31.036 }, 00:14:31.036 "memory_domains": [ 00:14:31.036 { 00:14:31.036 "dma_device_id": "system", 00:14:31.036 "dma_device_type": 1 00:14:31.036 }, 00:14:31.036 { 00:14:31.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.036 "dma_device_type": 2 00:14:31.036 } 00:14:31.036 ], 00:14:31.036 "driver_specific": {} 00:14:31.036 } 00:14:31.036 ] 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.036 BaseBdev4 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.036 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.036 [ 00:14:31.036 { 00:14:31.036 "name": "BaseBdev4", 00:14:31.036 
"aliases": [ 00:14:31.036 "d48f0ff3-76dd-4ed6-94cb-d0b06d8f265f" 00:14:31.036 ], 00:14:31.036 "product_name": "Malloc disk", 00:14:31.036 "block_size": 512, 00:14:31.036 "num_blocks": 65536, 00:14:31.036 "uuid": "d48f0ff3-76dd-4ed6-94cb-d0b06d8f265f", 00:14:31.036 "assigned_rate_limits": { 00:14:31.036 "rw_ios_per_sec": 0, 00:14:31.036 "rw_mbytes_per_sec": 0, 00:14:31.036 "r_mbytes_per_sec": 0, 00:14:31.036 "w_mbytes_per_sec": 0 00:14:31.036 }, 00:14:31.036 "claimed": false, 00:14:31.036 "zoned": false, 00:14:31.036 "supported_io_types": { 00:14:31.036 "read": true, 00:14:31.036 "write": true, 00:14:31.036 "unmap": true, 00:14:31.036 "flush": true, 00:14:31.036 "reset": true, 00:14:31.036 "nvme_admin": false, 00:14:31.036 "nvme_io": false, 00:14:31.036 "nvme_io_md": false, 00:14:31.036 "write_zeroes": true, 00:14:31.036 "zcopy": true, 00:14:31.036 "get_zone_info": false, 00:14:31.036 "zone_management": false, 00:14:31.036 "zone_append": false, 00:14:31.036 "compare": false, 00:14:31.036 "compare_and_write": false, 00:14:31.036 "abort": true, 00:14:31.036 "seek_hole": false, 00:14:31.036 "seek_data": false, 00:14:31.036 "copy": true, 00:14:31.036 "nvme_iov_md": false 00:14:31.036 }, 00:14:31.036 "memory_domains": [ 00:14:31.036 { 00:14:31.036 "dma_device_id": "system", 00:14:31.036 "dma_device_type": 1 00:14:31.036 }, 00:14:31.036 { 00:14:31.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.036 "dma_device_type": 2 00:14:31.036 } 00:14:31.036 ], 00:14:31.036 "driver_specific": {} 00:14:31.037 } 00:14:31.037 ] 00:14:31.037 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.037 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:31.037 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:31.037 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:31.037 
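For readers following the trace: each `rpc_cmd bdev_get_bdevs -b BaseBdevN -t 2000` call above returns a JSON descriptor for the Malloc bdev just created with `bdev_malloc_create 32 512`. A minimal sketch of checking such a descriptor programmatically (the JSON literal is abridged from the BaseBdev4 output above; fields not shown in this log are omitted, and this script is illustrative only, not part of the test suite):

```python
import json

# Abridged descriptor as reported by `bdev_get_bdevs -b BaseBdev4` in the log above
bdev_json = '''
{
  "name": "BaseBdev4",
  "product_name": "Malloc disk",
  "block_size": 512,
  "num_blocks": 65536,
  "claimed": false,
  "supported_io_types": {"read": true, "write": true, "unmap": true, "flush": true}
}
'''

bdev = json.loads(bdev_json)

# A base bdev must be unclaimed before bdev_raid_create can claim it
assert bdev["claimed"] is False

# Total size = block_size * num_blocks: 512 B * 65536 = 32 MiB,
# matching the `bdev_malloc_create 32 512` (32 MiB, 512 B blocks) call above
size_bytes = bdev["block_size"] * bdev["num_blocks"]
print(bdev["name"], size_bytes)  # BaseBdev4 33554432
```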
12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:31.037 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.037 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.037 [2024-11-26 12:57:48.612641] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:31.037 [2024-11-26 12:57:48.612767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:31.037 [2024-11-26 12:57:48.612810] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:31.037 [2024-11-26 12:57:48.614573] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:31.037 [2024-11-26 12:57:48.614660] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:31.037 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.037 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:31.037 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.037 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.037 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.037 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.037 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:31.037 12:57:48 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.037 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.037 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.037 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.037 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.037 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.037 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.037 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.037 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.037 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.037 "name": "Existed_Raid", 00:14:31.037 "uuid": "0b88a271-25ed-4d2d-a54b-3e1ab503b5b8", 00:14:31.037 "strip_size_kb": 64, 00:14:31.037 "state": "configuring", 00:14:31.037 "raid_level": "raid5f", 00:14:31.037 "superblock": true, 00:14:31.037 "num_base_bdevs": 4, 00:14:31.037 "num_base_bdevs_discovered": 3, 00:14:31.037 "num_base_bdevs_operational": 4, 00:14:31.037 "base_bdevs_list": [ 00:14:31.037 { 00:14:31.037 "name": "BaseBdev1", 00:14:31.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.037 "is_configured": false, 00:14:31.037 "data_offset": 0, 00:14:31.037 "data_size": 0 00:14:31.037 }, 00:14:31.037 { 00:14:31.037 "name": "BaseBdev2", 00:14:31.037 "uuid": "c08f0071-619f-4f34-bdfe-f446dc9b59f3", 00:14:31.037 "is_configured": true, 00:14:31.037 "data_offset": 2048, 00:14:31.037 "data_size": 63488 00:14:31.037 }, 00:14:31.037 { 00:14:31.037 "name": "BaseBdev3", 
00:14:31.037 "uuid": "2909d78b-184d-4969-96f4-0ade26d0b8f2", 00:14:31.037 "is_configured": true, 00:14:31.037 "data_offset": 2048, 00:14:31.037 "data_size": 63488 00:14:31.037 }, 00:14:31.037 { 00:14:31.037 "name": "BaseBdev4", 00:14:31.037 "uuid": "d48f0ff3-76dd-4ed6-94cb-d0b06d8f265f", 00:14:31.037 "is_configured": true, 00:14:31.037 "data_offset": 2048, 00:14:31.037 "data_size": 63488 00:14:31.037 } 00:14:31.037 ] 00:14:31.037 }' 00:14:31.037 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.037 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.606 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:31.606 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.606 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.606 [2024-11-26 12:57:49.047917] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:31.606 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.606 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:31.606 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.606 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.606 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.606 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.606 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:31.606 
12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.606 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.606 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.606 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.606 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.606 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.606 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.606 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.607 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.607 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.607 "name": "Existed_Raid", 00:14:31.607 "uuid": "0b88a271-25ed-4d2d-a54b-3e1ab503b5b8", 00:14:31.607 "strip_size_kb": 64, 00:14:31.607 "state": "configuring", 00:14:31.607 "raid_level": "raid5f", 00:14:31.607 "superblock": true, 00:14:31.607 "num_base_bdevs": 4, 00:14:31.607 "num_base_bdevs_discovered": 2, 00:14:31.607 "num_base_bdevs_operational": 4, 00:14:31.607 "base_bdevs_list": [ 00:14:31.607 { 00:14:31.607 "name": "BaseBdev1", 00:14:31.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.607 "is_configured": false, 00:14:31.607 "data_offset": 0, 00:14:31.607 "data_size": 0 00:14:31.607 }, 00:14:31.607 { 00:14:31.607 "name": null, 00:14:31.607 "uuid": "c08f0071-619f-4f34-bdfe-f446dc9b59f3", 00:14:31.607 "is_configured": false, 00:14:31.607 "data_offset": 0, 00:14:31.607 "data_size": 63488 00:14:31.607 }, 00:14:31.607 { 
00:14:31.607 "name": "BaseBdev3", 00:14:31.607 "uuid": "2909d78b-184d-4969-96f4-0ade26d0b8f2", 00:14:31.607 "is_configured": true, 00:14:31.607 "data_offset": 2048, 00:14:31.607 "data_size": 63488 00:14:31.607 }, 00:14:31.607 { 00:14:31.607 "name": "BaseBdev4", 00:14:31.607 "uuid": "d48f0ff3-76dd-4ed6-94cb-d0b06d8f265f", 00:14:31.607 "is_configured": true, 00:14:31.607 "data_offset": 2048, 00:14:31.607 "data_size": 63488 00:14:31.607 } 00:14:31.607 ] 00:14:31.607 }' 00:14:31.607 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.607 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.868 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.868 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.868 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.868 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.129 [2024-11-26 12:57:49.598066] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:32.129 BaseBdev1 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.129 [ 00:14:32.129 { 00:14:32.129 "name": "BaseBdev1", 00:14:32.129 "aliases": [ 00:14:32.129 "e222630a-5949-4f1b-ae19-88e7d5fbd7e9" 00:14:32.129 ], 00:14:32.129 "product_name": "Malloc disk", 00:14:32.129 "block_size": 512, 00:14:32.129 "num_blocks": 65536, 00:14:32.129 "uuid": "e222630a-5949-4f1b-ae19-88e7d5fbd7e9", 00:14:32.129 "assigned_rate_limits": { 00:14:32.129 "rw_ios_per_sec": 0, 00:14:32.129 "rw_mbytes_per_sec": 0, 00:14:32.129 
"r_mbytes_per_sec": 0, 00:14:32.129 "w_mbytes_per_sec": 0 00:14:32.129 }, 00:14:32.129 "claimed": true, 00:14:32.129 "claim_type": "exclusive_write", 00:14:32.129 "zoned": false, 00:14:32.129 "supported_io_types": { 00:14:32.129 "read": true, 00:14:32.129 "write": true, 00:14:32.129 "unmap": true, 00:14:32.129 "flush": true, 00:14:32.129 "reset": true, 00:14:32.129 "nvme_admin": false, 00:14:32.129 "nvme_io": false, 00:14:32.129 "nvme_io_md": false, 00:14:32.129 "write_zeroes": true, 00:14:32.129 "zcopy": true, 00:14:32.129 "get_zone_info": false, 00:14:32.129 "zone_management": false, 00:14:32.129 "zone_append": false, 00:14:32.129 "compare": false, 00:14:32.129 "compare_and_write": false, 00:14:32.129 "abort": true, 00:14:32.129 "seek_hole": false, 00:14:32.129 "seek_data": false, 00:14:32.129 "copy": true, 00:14:32.129 "nvme_iov_md": false 00:14:32.129 }, 00:14:32.129 "memory_domains": [ 00:14:32.129 { 00:14:32.129 "dma_device_id": "system", 00:14:32.129 "dma_device_type": 1 00:14:32.129 }, 00:14:32.129 { 00:14:32.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.129 "dma_device_type": 2 00:14:32.129 } 00:14:32.129 ], 00:14:32.129 "driver_specific": {} 00:14:32.129 } 00:14:32.129 ] 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.129 12:57:49 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.129 "name": "Existed_Raid", 00:14:32.129 "uuid": "0b88a271-25ed-4d2d-a54b-3e1ab503b5b8", 00:14:32.129 "strip_size_kb": 64, 00:14:32.129 "state": "configuring", 00:14:32.129 "raid_level": "raid5f", 00:14:32.129 "superblock": true, 00:14:32.129 "num_base_bdevs": 4, 00:14:32.129 "num_base_bdevs_discovered": 3, 00:14:32.129 "num_base_bdevs_operational": 4, 00:14:32.129 "base_bdevs_list": [ 00:14:32.129 { 00:14:32.129 "name": "BaseBdev1", 00:14:32.129 "uuid": "e222630a-5949-4f1b-ae19-88e7d5fbd7e9", 00:14:32.129 "is_configured": true, 00:14:32.129 "data_offset": 2048, 00:14:32.129 "data_size": 63488 00:14:32.129 
}, 00:14:32.129 { 00:14:32.129 "name": null, 00:14:32.129 "uuid": "c08f0071-619f-4f34-bdfe-f446dc9b59f3", 00:14:32.129 "is_configured": false, 00:14:32.129 "data_offset": 0, 00:14:32.129 "data_size": 63488 00:14:32.129 }, 00:14:32.129 { 00:14:32.129 "name": "BaseBdev3", 00:14:32.129 "uuid": "2909d78b-184d-4969-96f4-0ade26d0b8f2", 00:14:32.129 "is_configured": true, 00:14:32.129 "data_offset": 2048, 00:14:32.129 "data_size": 63488 00:14:32.129 }, 00:14:32.129 { 00:14:32.129 "name": "BaseBdev4", 00:14:32.129 "uuid": "d48f0ff3-76dd-4ed6-94cb-d0b06d8f265f", 00:14:32.129 "is_configured": true, 00:14:32.129 "data_offset": 2048, 00:14:32.129 "data_size": 63488 00:14:32.129 } 00:14:32.129 ] 00:14:32.129 }' 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.129 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.696 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.697 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:32.697 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.697 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.697 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.697 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:32.697 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:32.697 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.697 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.697 
[2024-11-26 12:57:50.185087] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:32.697 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.697 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:32.697 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.697 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.697 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.697 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.697 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:32.697 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.697 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.697 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.697 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.697 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.697 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.697 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.697 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.697 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:32.697 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.697 "name": "Existed_Raid", 00:14:32.697 "uuid": "0b88a271-25ed-4d2d-a54b-3e1ab503b5b8", 00:14:32.697 "strip_size_kb": 64, 00:14:32.697 "state": "configuring", 00:14:32.697 "raid_level": "raid5f", 00:14:32.697 "superblock": true, 00:14:32.697 "num_base_bdevs": 4, 00:14:32.697 "num_base_bdevs_discovered": 2, 00:14:32.697 "num_base_bdevs_operational": 4, 00:14:32.697 "base_bdevs_list": [ 00:14:32.697 { 00:14:32.697 "name": "BaseBdev1", 00:14:32.697 "uuid": "e222630a-5949-4f1b-ae19-88e7d5fbd7e9", 00:14:32.697 "is_configured": true, 00:14:32.697 "data_offset": 2048, 00:14:32.697 "data_size": 63488 00:14:32.697 }, 00:14:32.697 { 00:14:32.697 "name": null, 00:14:32.697 "uuid": "c08f0071-619f-4f34-bdfe-f446dc9b59f3", 00:14:32.697 "is_configured": false, 00:14:32.697 "data_offset": 0, 00:14:32.697 "data_size": 63488 00:14:32.697 }, 00:14:32.697 { 00:14:32.697 "name": null, 00:14:32.697 "uuid": "2909d78b-184d-4969-96f4-0ade26d0b8f2", 00:14:32.697 "is_configured": false, 00:14:32.697 "data_offset": 0, 00:14:32.697 "data_size": 63488 00:14:32.697 }, 00:14:32.697 { 00:14:32.697 "name": "BaseBdev4", 00:14:32.697 "uuid": "d48f0ff3-76dd-4ed6-94cb-d0b06d8f265f", 00:14:32.697 "is_configured": true, 00:14:32.697 "data_offset": 2048, 00:14:32.697 "data_size": 63488 00:14:32.697 } 00:14:32.697 ] 00:14:32.697 }' 00:14:32.697 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.697 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.955 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:32.955 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.955 12:57:50 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.955 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.955 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.955 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:32.955 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:32.955 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.955 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.955 [2024-11-26 12:57:50.628398] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:33.214 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.214 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:33.214 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.214 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:33.214 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.214 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.214 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:33.214 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.214 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.214 12:57:50 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.214 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.214 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.214 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.214 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.214 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.214 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.214 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.214 "name": "Existed_Raid", 00:14:33.214 "uuid": "0b88a271-25ed-4d2d-a54b-3e1ab503b5b8", 00:14:33.214 "strip_size_kb": 64, 00:14:33.214 "state": "configuring", 00:14:33.214 "raid_level": "raid5f", 00:14:33.214 "superblock": true, 00:14:33.214 "num_base_bdevs": 4, 00:14:33.214 "num_base_bdevs_discovered": 3, 00:14:33.214 "num_base_bdevs_operational": 4, 00:14:33.214 "base_bdevs_list": [ 00:14:33.214 { 00:14:33.214 "name": "BaseBdev1", 00:14:33.214 "uuid": "e222630a-5949-4f1b-ae19-88e7d5fbd7e9", 00:14:33.214 "is_configured": true, 00:14:33.214 "data_offset": 2048, 00:14:33.214 "data_size": 63488 00:14:33.214 }, 00:14:33.214 { 00:14:33.214 "name": null, 00:14:33.214 "uuid": "c08f0071-619f-4f34-bdfe-f446dc9b59f3", 00:14:33.214 "is_configured": false, 00:14:33.214 "data_offset": 0, 00:14:33.214 "data_size": 63488 00:14:33.214 }, 00:14:33.214 { 00:14:33.214 "name": "BaseBdev3", 00:14:33.214 "uuid": "2909d78b-184d-4969-96f4-0ade26d0b8f2", 00:14:33.214 "is_configured": true, 00:14:33.214 "data_offset": 2048, 00:14:33.214 "data_size": 63488 00:14:33.214 }, 00:14:33.214 { 
00:14:33.214 "name": "BaseBdev4", 00:14:33.214 "uuid": "d48f0ff3-76dd-4ed6-94cb-d0b06d8f265f", 00:14:33.214 "is_configured": true, 00:14:33.214 "data_offset": 2048, 00:14:33.214 "data_size": 63488 00:14:33.215 } 00:14:33.215 ] 00:14:33.215 }' 00:14:33.215 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.215 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.474 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.474 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:33.474 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.474 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.474 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.474 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:33.474 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:33.474 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.474 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.474 [2024-11-26 12:57:51.131763] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:33.474 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.474 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:33.474 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:14:33.474 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:33.474 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.474 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.474 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:33.474 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.474 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.474 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.474 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.474 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.475 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.734 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.734 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.734 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.734 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.734 "name": "Existed_Raid", 00:14:33.734 "uuid": "0b88a271-25ed-4d2d-a54b-3e1ab503b5b8", 00:14:33.734 "strip_size_kb": 64, 00:14:33.734 "state": "configuring", 00:14:33.734 "raid_level": "raid5f", 00:14:33.734 "superblock": true, 00:14:33.734 "num_base_bdevs": 4, 00:14:33.734 "num_base_bdevs_discovered": 2, 00:14:33.734 
"num_base_bdevs_operational": 4, 00:14:33.734 "base_bdevs_list": [ 00:14:33.734 { 00:14:33.734 "name": null, 00:14:33.734 "uuid": "e222630a-5949-4f1b-ae19-88e7d5fbd7e9", 00:14:33.734 "is_configured": false, 00:14:33.734 "data_offset": 0, 00:14:33.734 "data_size": 63488 00:14:33.734 }, 00:14:33.734 { 00:14:33.734 "name": null, 00:14:33.734 "uuid": "c08f0071-619f-4f34-bdfe-f446dc9b59f3", 00:14:33.734 "is_configured": false, 00:14:33.734 "data_offset": 0, 00:14:33.734 "data_size": 63488 00:14:33.734 }, 00:14:33.734 { 00:14:33.734 "name": "BaseBdev3", 00:14:33.734 "uuid": "2909d78b-184d-4969-96f4-0ade26d0b8f2", 00:14:33.734 "is_configured": true, 00:14:33.734 "data_offset": 2048, 00:14:33.734 "data_size": 63488 00:14:33.734 }, 00:14:33.734 { 00:14:33.734 "name": "BaseBdev4", 00:14:33.734 "uuid": "d48f0ff3-76dd-4ed6-94cb-d0b06d8f265f", 00:14:33.735 "is_configured": true, 00:14:33.735 "data_offset": 2048, 00:14:33.735 "data_size": 63488 00:14:33.735 } 00:14:33.735 ] 00:14:33.735 }' 00:14:33.735 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.735 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.994 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.994 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:33.994 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.994 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.994 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.994 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:33.995 12:57:51 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:33.995 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.995 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.995 [2024-11-26 12:57:51.653374] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:33.995 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.995 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:33.995 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.995 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:33.995 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.995 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.995 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:33.995 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.995 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.995 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.995 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.995 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.995 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:14:33.995 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.995 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.254 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.254 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.254 "name": "Existed_Raid", 00:14:34.254 "uuid": "0b88a271-25ed-4d2d-a54b-3e1ab503b5b8", 00:14:34.254 "strip_size_kb": 64, 00:14:34.254 "state": "configuring", 00:14:34.254 "raid_level": "raid5f", 00:14:34.254 "superblock": true, 00:14:34.254 "num_base_bdevs": 4, 00:14:34.254 "num_base_bdevs_discovered": 3, 00:14:34.254 "num_base_bdevs_operational": 4, 00:14:34.254 "base_bdevs_list": [ 00:14:34.254 { 00:14:34.254 "name": null, 00:14:34.254 "uuid": "e222630a-5949-4f1b-ae19-88e7d5fbd7e9", 00:14:34.254 "is_configured": false, 00:14:34.254 "data_offset": 0, 00:14:34.254 "data_size": 63488 00:14:34.254 }, 00:14:34.254 { 00:14:34.254 "name": "BaseBdev2", 00:14:34.254 "uuid": "c08f0071-619f-4f34-bdfe-f446dc9b59f3", 00:14:34.254 "is_configured": true, 00:14:34.254 "data_offset": 2048, 00:14:34.254 "data_size": 63488 00:14:34.254 }, 00:14:34.254 { 00:14:34.254 "name": "BaseBdev3", 00:14:34.254 "uuid": "2909d78b-184d-4969-96f4-0ade26d0b8f2", 00:14:34.254 "is_configured": true, 00:14:34.254 "data_offset": 2048, 00:14:34.254 "data_size": 63488 00:14:34.254 }, 00:14:34.254 { 00:14:34.254 "name": "BaseBdev4", 00:14:34.254 "uuid": "d48f0ff3-76dd-4ed6-94cb-d0b06d8f265f", 00:14:34.254 "is_configured": true, 00:14:34.254 "data_offset": 2048, 00:14:34.254 "data_size": 63488 00:14:34.254 } 00:14:34.254 ] 00:14:34.254 }' 00:14:34.254 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.254 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:14:34.513 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:34.513 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.513 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.513 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.513 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.513 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:34.513 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:34.513 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.513 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.513 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.774 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.774 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e222630a-5949-4f1b-ae19-88e7d5fbd7e9 00:14:34.774 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.774 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.774 [2024-11-26 12:57:52.242642] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:34.774 [2024-11-26 12:57:52.242884] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:34.774 [2024-11-26 
12:57:52.242919] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:34.774 [2024-11-26 12:57:52.243195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:34.774 NewBaseBdev 00:14:34.774 [2024-11-26 12:57:52.243654] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:34.774 [2024-11-26 12:57:52.243748] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:14:34.774 [2024-11-26 12:57:52.243889] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.774 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.774 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:34.774 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:34.774 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:34.774 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:34.774 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:34.774 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:34.774 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:34.774 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.774 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.774 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.774 12:57:52 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:34.774 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.774 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.774 [ 00:14:34.774 { 00:14:34.774 "name": "NewBaseBdev", 00:14:34.774 "aliases": [ 00:14:34.774 "e222630a-5949-4f1b-ae19-88e7d5fbd7e9" 00:14:34.774 ], 00:14:34.774 "product_name": "Malloc disk", 00:14:34.774 "block_size": 512, 00:14:34.774 "num_blocks": 65536, 00:14:34.774 "uuid": "e222630a-5949-4f1b-ae19-88e7d5fbd7e9", 00:14:34.774 "assigned_rate_limits": { 00:14:34.774 "rw_ios_per_sec": 0, 00:14:34.774 "rw_mbytes_per_sec": 0, 00:14:34.774 "r_mbytes_per_sec": 0, 00:14:34.774 "w_mbytes_per_sec": 0 00:14:34.774 }, 00:14:34.774 "claimed": true, 00:14:34.774 "claim_type": "exclusive_write", 00:14:34.774 "zoned": false, 00:14:34.774 "supported_io_types": { 00:14:34.774 "read": true, 00:14:34.774 "write": true, 00:14:34.774 "unmap": true, 00:14:34.774 "flush": true, 00:14:34.774 "reset": true, 00:14:34.774 "nvme_admin": false, 00:14:34.774 "nvme_io": false, 00:14:34.774 "nvme_io_md": false, 00:14:34.774 "write_zeroes": true, 00:14:34.774 "zcopy": true, 00:14:34.774 "get_zone_info": false, 00:14:34.774 "zone_management": false, 00:14:34.774 "zone_append": false, 00:14:34.774 "compare": false, 00:14:34.774 "compare_and_write": false, 00:14:34.774 "abort": true, 00:14:34.774 "seek_hole": false, 00:14:34.774 "seek_data": false, 00:14:34.774 "copy": true, 00:14:34.774 "nvme_iov_md": false 00:14:34.774 }, 00:14:34.774 "memory_domains": [ 00:14:34.774 { 00:14:34.774 "dma_device_id": "system", 00:14:34.774 "dma_device_type": 1 00:14:34.774 }, 00:14:34.774 { 00:14:34.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.774 "dma_device_type": 2 00:14:34.774 } 00:14:34.774 ], 00:14:34.774 "driver_specific": {} 00:14:34.774 } 00:14:34.774 ] 00:14:34.774 12:57:52 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.774 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:34.774 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:34.774 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.774 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.774 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.774 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.774 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:34.774 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.775 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.775 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.775 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.775 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.775 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.775 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.775 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.775 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:34.775 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.775 "name": "Existed_Raid", 00:14:34.775 "uuid": "0b88a271-25ed-4d2d-a54b-3e1ab503b5b8", 00:14:34.775 "strip_size_kb": 64, 00:14:34.775 "state": "online", 00:14:34.775 "raid_level": "raid5f", 00:14:34.775 "superblock": true, 00:14:34.775 "num_base_bdevs": 4, 00:14:34.775 "num_base_bdevs_discovered": 4, 00:14:34.775 "num_base_bdevs_operational": 4, 00:14:34.775 "base_bdevs_list": [ 00:14:34.775 { 00:14:34.775 "name": "NewBaseBdev", 00:14:34.775 "uuid": "e222630a-5949-4f1b-ae19-88e7d5fbd7e9", 00:14:34.775 "is_configured": true, 00:14:34.775 "data_offset": 2048, 00:14:34.775 "data_size": 63488 00:14:34.775 }, 00:14:34.775 { 00:14:34.775 "name": "BaseBdev2", 00:14:34.775 "uuid": "c08f0071-619f-4f34-bdfe-f446dc9b59f3", 00:14:34.775 "is_configured": true, 00:14:34.775 "data_offset": 2048, 00:14:34.775 "data_size": 63488 00:14:34.775 }, 00:14:34.775 { 00:14:34.775 "name": "BaseBdev3", 00:14:34.775 "uuid": "2909d78b-184d-4969-96f4-0ade26d0b8f2", 00:14:34.775 "is_configured": true, 00:14:34.775 "data_offset": 2048, 00:14:34.775 "data_size": 63488 00:14:34.775 }, 00:14:34.775 { 00:14:34.775 "name": "BaseBdev4", 00:14:34.775 "uuid": "d48f0ff3-76dd-4ed6-94cb-d0b06d8f265f", 00:14:34.775 "is_configured": true, 00:14:34.775 "data_offset": 2048, 00:14:34.775 "data_size": 63488 00:14:34.775 } 00:14:34.775 ] 00:14:34.775 }' 00:14:34.775 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.775 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.035 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:35.035 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:35.035 12:57:52 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:35.035 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:35.035 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:35.035 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:35.035 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:35.035 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:35.035 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.035 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.035 [2024-11-26 12:57:52.702072] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:35.296 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.296 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:35.296 "name": "Existed_Raid", 00:14:35.296 "aliases": [ 00:14:35.296 "0b88a271-25ed-4d2d-a54b-3e1ab503b5b8" 00:14:35.296 ], 00:14:35.296 "product_name": "Raid Volume", 00:14:35.296 "block_size": 512, 00:14:35.296 "num_blocks": 190464, 00:14:35.296 "uuid": "0b88a271-25ed-4d2d-a54b-3e1ab503b5b8", 00:14:35.296 "assigned_rate_limits": { 00:14:35.296 "rw_ios_per_sec": 0, 00:14:35.296 "rw_mbytes_per_sec": 0, 00:14:35.296 "r_mbytes_per_sec": 0, 00:14:35.296 "w_mbytes_per_sec": 0 00:14:35.296 }, 00:14:35.296 "claimed": false, 00:14:35.296 "zoned": false, 00:14:35.296 "supported_io_types": { 00:14:35.296 "read": true, 00:14:35.296 "write": true, 00:14:35.296 "unmap": false, 00:14:35.296 "flush": false, 00:14:35.296 "reset": true, 00:14:35.296 "nvme_admin": false, 00:14:35.296 "nvme_io": false, 
00:14:35.296 "nvme_io_md": false, 00:14:35.296 "write_zeroes": true, 00:14:35.296 "zcopy": false, 00:14:35.296 "get_zone_info": false, 00:14:35.296 "zone_management": false, 00:14:35.296 "zone_append": false, 00:14:35.296 "compare": false, 00:14:35.296 "compare_and_write": false, 00:14:35.296 "abort": false, 00:14:35.296 "seek_hole": false, 00:14:35.296 "seek_data": false, 00:14:35.296 "copy": false, 00:14:35.296 "nvme_iov_md": false 00:14:35.296 }, 00:14:35.296 "driver_specific": { 00:14:35.296 "raid": { 00:14:35.296 "uuid": "0b88a271-25ed-4d2d-a54b-3e1ab503b5b8", 00:14:35.296 "strip_size_kb": 64, 00:14:35.296 "state": "online", 00:14:35.296 "raid_level": "raid5f", 00:14:35.296 "superblock": true, 00:14:35.296 "num_base_bdevs": 4, 00:14:35.296 "num_base_bdevs_discovered": 4, 00:14:35.296 "num_base_bdevs_operational": 4, 00:14:35.296 "base_bdevs_list": [ 00:14:35.296 { 00:14:35.296 "name": "NewBaseBdev", 00:14:35.296 "uuid": "e222630a-5949-4f1b-ae19-88e7d5fbd7e9", 00:14:35.296 "is_configured": true, 00:14:35.296 "data_offset": 2048, 00:14:35.296 "data_size": 63488 00:14:35.296 }, 00:14:35.296 { 00:14:35.296 "name": "BaseBdev2", 00:14:35.296 "uuid": "c08f0071-619f-4f34-bdfe-f446dc9b59f3", 00:14:35.296 "is_configured": true, 00:14:35.296 "data_offset": 2048, 00:14:35.296 "data_size": 63488 00:14:35.296 }, 00:14:35.296 { 00:14:35.296 "name": "BaseBdev3", 00:14:35.296 "uuid": "2909d78b-184d-4969-96f4-0ade26d0b8f2", 00:14:35.296 "is_configured": true, 00:14:35.296 "data_offset": 2048, 00:14:35.296 "data_size": 63488 00:14:35.296 }, 00:14:35.296 { 00:14:35.296 "name": "BaseBdev4", 00:14:35.296 "uuid": "d48f0ff3-76dd-4ed6-94cb-d0b06d8f265f", 00:14:35.296 "is_configured": true, 00:14:35.296 "data_offset": 2048, 00:14:35.296 "data_size": 63488 00:14:35.296 } 00:14:35.296 ] 00:14:35.296 } 00:14:35.296 } 00:14:35.296 }' 00:14:35.296 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:14:35.296 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:35.296 BaseBdev2 00:14:35.296 BaseBdev3 00:14:35.296 BaseBdev4' 00:14:35.296 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.296 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:35.296 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:35.296 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:35.296 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.296 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.296 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.296 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.296 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:35.296 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:35.296 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:35.296 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.296 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:35.296 12:57:52 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.296 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.296 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.296 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:35.296 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:35.296 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:35.296 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.296 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:35.296 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.296 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.296 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.557 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:35.557 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:35.557 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:35.557 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:35.557 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.557 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.557 12:57:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.557 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.557 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:35.557 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:35.557 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:35.557 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.557 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.557 [2024-11-26 12:57:53.053284] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:35.557 [2024-11-26 12:57:53.053309] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:35.557 [2024-11-26 12:57:53.053371] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:35.557 [2024-11-26 12:57:53.053620] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:35.557 [2024-11-26 12:57:53.053647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:14:35.557 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.557 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 94134 00:14:35.557 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 94134 ']' 00:14:35.557 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 94134 00:14:35.557 12:57:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:35.557 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:35.557 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94134 00:14:35.557 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:35.557 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:35.557 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94134' 00:14:35.557 killing process with pid 94134 00:14:35.557 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 94134 00:14:35.557 [2024-11-26 12:57:53.103937] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:35.557 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 94134 00:14:35.557 [2024-11-26 12:57:53.145367] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:35.818 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:35.818 00:14:35.818 real 0m9.888s 00:14:35.818 user 0m16.866s 00:14:35.818 sys 0m2.164s 00:14:35.818 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:35.818 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.818 ************************************ 00:14:35.818 END TEST raid5f_state_function_test_sb 00:14:35.818 ************************************ 00:14:35.818 12:57:53 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:14:35.818 12:57:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:35.818 
12:57:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:35.818 12:57:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:35.818 ************************************ 00:14:35.818 START TEST raid5f_superblock_test 00:14:35.818 ************************************ 00:14:35.818 12:57:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:14:35.818 12:57:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:35.818 12:57:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:35.818 12:57:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:35.818 12:57:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:35.818 12:57:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:35.818 12:57:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:35.818 12:57:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:35.818 12:57:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:35.818 12:57:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:35.818 12:57:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:35.818 12:57:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:35.818 12:57:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:35.818 12:57:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:35.818 12:57:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:35.818 12:57:53 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:35.818 12:57:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:35.818 12:57:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=94782 00:14:35.818 12:57:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 94782 00:14:35.818 12:57:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:35.818 12:57:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 94782 ']' 00:14:35.818 12:57:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.818 12:57:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:35.818 12:57:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.818 12:57:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:35.818 12:57:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.078 [2024-11-26 12:57:53.572456] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:36.078 [2024-11-26 12:57:53.572620] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94782 ] 00:14:36.078 [2024-11-26 12:57:53.738179] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.338 [2024-11-26 12:57:53.785147] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.338 [2024-11-26 12:57:53.827715] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.338 [2024-11-26 12:57:53.827860] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.909 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:36.909 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:36.909 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:36.909 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:36.909 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:36.909 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:36.909 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:36.909 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:36.909 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:36.909 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:36.909 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:14:36.909 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.909 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.909 malloc1 00:14:36.909 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.909 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:36.909 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.909 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.909 [2024-11-26 12:57:54.406319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:36.909 [2024-11-26 12:57:54.406474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.909 [2024-11-26 12:57:54.406516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:36.909 [2024-11-26 12:57:54.406574] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.909 [2024-11-26 12:57:54.408631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.909 [2024-11-26 12:57:54.408706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:36.909 pt1 00:14:36.909 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.909 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:36.909 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:36.909 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:36.909 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:14:36.909 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:36.909 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:36.909 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:36.909 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:36.909 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:36.909 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.909 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.909 malloc2 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.910 [2024-11-26 12:57:54.447245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:36.910 [2024-11-26 12:57:54.447366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.910 [2024-11-26 12:57:54.447406] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:36.910 [2024-11-26 12:57:54.447442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.910 [2024-11-26 12:57:54.449641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.910 [2024-11-26 12:57:54.449677] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:36.910 pt2 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.910 malloc3 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.910 [2024-11-26 12:57:54.475716] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:36.910 [2024-11-26 12:57:54.475823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.910 [2024-11-26 12:57:54.475856] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:36.910 [2024-11-26 12:57:54.475884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.910 [2024-11-26 12:57:54.477886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.910 [2024-11-26 12:57:54.477958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:36.910 pt3 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.910 12:57:54 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.910 malloc4 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.910 [2024-11-26 12:57:54.508142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:36.910 [2024-11-26 12:57:54.508253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.910 [2024-11-26 12:57:54.508285] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:36.910 [2024-11-26 12:57:54.508315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.910 [2024-11-26 12:57:54.510335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.910 [2024-11-26 12:57:54.510421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:36.910 pt4 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:36.910 [2024-11-26 12:57:54.520223] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:36.910 [2024-11-26 12:57:54.522055] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:36.910 [2024-11-26 12:57:54.522166] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:36.910 [2024-11-26 12:57:54.522275] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:36.910 [2024-11-26 12:57:54.522471] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:36.910 [2024-11-26 12:57:54.522521] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:36.910 [2024-11-26 12:57:54.522801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:36.910 [2024-11-26 12:57:54.523304] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:36.910 [2024-11-26 12:57:54.523357] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:36.910 [2024-11-26 12:57:54.523519] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.910 
12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.910 "name": "raid_bdev1", 00:14:36.910 "uuid": "fe4b68ef-4ae8-417b-83be-31e82c6d83fb", 00:14:36.910 "strip_size_kb": 64, 00:14:36.910 "state": "online", 00:14:36.910 "raid_level": "raid5f", 00:14:36.910 "superblock": true, 00:14:36.910 "num_base_bdevs": 4, 00:14:36.910 "num_base_bdevs_discovered": 4, 00:14:36.910 "num_base_bdevs_operational": 4, 00:14:36.910 "base_bdevs_list": [ 00:14:36.910 { 00:14:36.910 "name": "pt1", 00:14:36.910 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:36.910 "is_configured": true, 00:14:36.910 "data_offset": 2048, 00:14:36.910 "data_size": 63488 00:14:36.910 }, 00:14:36.910 { 00:14:36.910 "name": "pt2", 00:14:36.910 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:36.910 "is_configured": true, 00:14:36.910 "data_offset": 2048, 00:14:36.910 
"data_size": 63488 00:14:36.910 }, 00:14:36.910 { 00:14:36.910 "name": "pt3", 00:14:36.910 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:36.910 "is_configured": true, 00:14:36.910 "data_offset": 2048, 00:14:36.910 "data_size": 63488 00:14:36.910 }, 00:14:36.910 { 00:14:36.910 "name": "pt4", 00:14:36.910 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:36.910 "is_configured": true, 00:14:36.910 "data_offset": 2048, 00:14:36.910 "data_size": 63488 00:14:36.910 } 00:14:36.910 ] 00:14:36.910 }' 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.910 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.481 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:37.481 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:37.481 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:37.481 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:37.481 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:37.481 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:37.481 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:37.481 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:37.481 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.481 12:57:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.481 [2024-11-26 12:57:54.964703] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:37.481 12:57:54 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.481 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:37.481 "name": "raid_bdev1", 00:14:37.481 "aliases": [ 00:14:37.481 "fe4b68ef-4ae8-417b-83be-31e82c6d83fb" 00:14:37.481 ], 00:14:37.481 "product_name": "Raid Volume", 00:14:37.481 "block_size": 512, 00:14:37.481 "num_blocks": 190464, 00:14:37.481 "uuid": "fe4b68ef-4ae8-417b-83be-31e82c6d83fb", 00:14:37.481 "assigned_rate_limits": { 00:14:37.481 "rw_ios_per_sec": 0, 00:14:37.481 "rw_mbytes_per_sec": 0, 00:14:37.481 "r_mbytes_per_sec": 0, 00:14:37.481 "w_mbytes_per_sec": 0 00:14:37.481 }, 00:14:37.481 "claimed": false, 00:14:37.481 "zoned": false, 00:14:37.481 "supported_io_types": { 00:14:37.481 "read": true, 00:14:37.481 "write": true, 00:14:37.481 "unmap": false, 00:14:37.481 "flush": false, 00:14:37.481 "reset": true, 00:14:37.481 "nvme_admin": false, 00:14:37.481 "nvme_io": false, 00:14:37.481 "nvme_io_md": false, 00:14:37.481 "write_zeroes": true, 00:14:37.481 "zcopy": false, 00:14:37.481 "get_zone_info": false, 00:14:37.481 "zone_management": false, 00:14:37.481 "zone_append": false, 00:14:37.481 "compare": false, 00:14:37.481 "compare_and_write": false, 00:14:37.481 "abort": false, 00:14:37.481 "seek_hole": false, 00:14:37.482 "seek_data": false, 00:14:37.482 "copy": false, 00:14:37.482 "nvme_iov_md": false 00:14:37.482 }, 00:14:37.482 "driver_specific": { 00:14:37.482 "raid": { 00:14:37.482 "uuid": "fe4b68ef-4ae8-417b-83be-31e82c6d83fb", 00:14:37.482 "strip_size_kb": 64, 00:14:37.482 "state": "online", 00:14:37.482 "raid_level": "raid5f", 00:14:37.482 "superblock": true, 00:14:37.482 "num_base_bdevs": 4, 00:14:37.482 "num_base_bdevs_discovered": 4, 00:14:37.482 "num_base_bdevs_operational": 4, 00:14:37.482 "base_bdevs_list": [ 00:14:37.482 { 00:14:37.482 "name": "pt1", 00:14:37.482 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:37.482 "is_configured": true, 00:14:37.482 "data_offset": 2048, 
00:14:37.482 "data_size": 63488 00:14:37.482 }, 00:14:37.482 { 00:14:37.482 "name": "pt2", 00:14:37.482 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:37.482 "is_configured": true, 00:14:37.482 "data_offset": 2048, 00:14:37.482 "data_size": 63488 00:14:37.482 }, 00:14:37.482 { 00:14:37.482 "name": "pt3", 00:14:37.482 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:37.482 "is_configured": true, 00:14:37.482 "data_offset": 2048, 00:14:37.482 "data_size": 63488 00:14:37.482 }, 00:14:37.482 { 00:14:37.482 "name": "pt4", 00:14:37.482 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:37.482 "is_configured": true, 00:14:37.482 "data_offset": 2048, 00:14:37.482 "data_size": 63488 00:14:37.482 } 00:14:37.482 ] 00:14:37.482 } 00:14:37.482 } 00:14:37.482 }' 00:14:37.482 12:57:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:37.482 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:37.482 pt2 00:14:37.482 pt3 00:14:37.482 pt4' 00:14:37.482 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.482 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:37.482 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.482 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:37.482 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.482 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.482 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.482 12:57:55 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.482 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:37.482 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:37.482 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.482 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:37.482 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.482 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.482 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.482 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.743 [2024-11-26 12:57:55.272215] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fe4b68ef-4ae8-417b-83be-31e82c6d83fb 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
fe4b68ef-4ae8-417b-83be-31e82c6d83fb ']' 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.743 [2024-11-26 12:57:55.307979] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:37.743 [2024-11-26 12:57:55.308054] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:37.743 [2024-11-26 12:57:55.308116] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:37.743 [2024-11-26 12:57:55.308195] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:37.743 [2024-11-26 12:57:55.308205] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:37.743 
12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.743 12:57:55 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.743 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.003 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.003 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.004 [2024-11-26 12:57:55.467778] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:38.004 [2024-11-26 12:57:55.469626] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:38.004 [2024-11-26 12:57:55.469723] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:38.004 [2024-11-26 12:57:55.469768] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:38.004 [2024-11-26 12:57:55.469855] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:38.004 [2024-11-26 12:57:55.469919] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:38.004 [2024-11-26 12:57:55.469965] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:38.004 [2024-11-26 12:57:55.470006] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:38.004 [2024-11-26 12:57:55.470069] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:38.004 [2024-11-26 12:57:55.470082] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:14:38.004 request: 00:14:38.004 { 00:14:38.004 "name": "raid_bdev1", 00:14:38.004 "raid_level": "raid5f", 00:14:38.004 "base_bdevs": [ 00:14:38.004 "malloc1", 00:14:38.004 "malloc2", 00:14:38.004 "malloc3", 00:14:38.004 "malloc4" 00:14:38.004 ], 00:14:38.004 "strip_size_kb": 64, 00:14:38.004 "superblock": false, 00:14:38.004 "method": "bdev_raid_create", 00:14:38.004 "req_id": 1 00:14:38.004 } 00:14:38.004 Got JSON-RPC error response 
00:14:38.004 response: 00:14:38.004 { 00:14:38.004 "code": -17, 00:14:38.004 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:38.004 } 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.004 [2024-11-26 12:57:55.539613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:38.004 [2024-11-26 12:57:55.539709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:14:38.004 [2024-11-26 12:57:55.539730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:38.004 [2024-11-26 12:57:55.539738] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.004 [2024-11-26 12:57:55.541717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.004 [2024-11-26 12:57:55.541750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:38.004 [2024-11-26 12:57:55.541801] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:38.004 [2024-11-26 12:57:55.541836] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:38.004 pt1 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.004 "name": "raid_bdev1", 00:14:38.004 "uuid": "fe4b68ef-4ae8-417b-83be-31e82c6d83fb", 00:14:38.004 "strip_size_kb": 64, 00:14:38.004 "state": "configuring", 00:14:38.004 "raid_level": "raid5f", 00:14:38.004 "superblock": true, 00:14:38.004 "num_base_bdevs": 4, 00:14:38.004 "num_base_bdevs_discovered": 1, 00:14:38.004 "num_base_bdevs_operational": 4, 00:14:38.004 "base_bdevs_list": [ 00:14:38.004 { 00:14:38.004 "name": "pt1", 00:14:38.004 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:38.004 "is_configured": true, 00:14:38.004 "data_offset": 2048, 00:14:38.004 "data_size": 63488 00:14:38.004 }, 00:14:38.004 { 00:14:38.004 "name": null, 00:14:38.004 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:38.004 "is_configured": false, 00:14:38.004 "data_offset": 2048, 00:14:38.004 "data_size": 63488 00:14:38.004 }, 00:14:38.004 { 00:14:38.004 "name": null, 00:14:38.004 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:38.004 "is_configured": false, 00:14:38.004 "data_offset": 2048, 00:14:38.004 "data_size": 63488 00:14:38.004 }, 00:14:38.004 { 00:14:38.004 "name": null, 00:14:38.004 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:38.004 "is_configured": false, 00:14:38.004 "data_offset": 2048, 00:14:38.004 "data_size": 63488 00:14:38.004 } 00:14:38.004 ] 00:14:38.004 }' 
00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.004 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.573 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:38.573 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:38.573 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.573 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.573 [2024-11-26 12:57:55.974865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:38.573 [2024-11-26 12:57:55.974974] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.573 [2024-11-26 12:57:55.975006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:38.573 [2024-11-26 12:57:55.975033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.573 [2024-11-26 12:57:55.975384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.573 [2024-11-26 12:57:55.975440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:38.573 [2024-11-26 12:57:55.975518] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:38.573 [2024-11-26 12:57:55.975564] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:38.573 pt2 00:14:38.573 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.573 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:38.573 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:38.573 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.573 [2024-11-26 12:57:55.986862] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:38.573 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.573 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:14:38.573 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.573 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.573 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.573 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.573 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:38.573 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.573 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.573 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.573 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.573 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.573 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.573 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.573 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.573 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:14:38.573 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.573 "name": "raid_bdev1", 00:14:38.573 "uuid": "fe4b68ef-4ae8-417b-83be-31e82c6d83fb", 00:14:38.573 "strip_size_kb": 64, 00:14:38.573 "state": "configuring", 00:14:38.573 "raid_level": "raid5f", 00:14:38.573 "superblock": true, 00:14:38.573 "num_base_bdevs": 4, 00:14:38.573 "num_base_bdevs_discovered": 1, 00:14:38.573 "num_base_bdevs_operational": 4, 00:14:38.573 "base_bdevs_list": [ 00:14:38.573 { 00:14:38.573 "name": "pt1", 00:14:38.573 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:38.573 "is_configured": true, 00:14:38.573 "data_offset": 2048, 00:14:38.573 "data_size": 63488 00:14:38.573 }, 00:14:38.573 { 00:14:38.573 "name": null, 00:14:38.573 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:38.573 "is_configured": false, 00:14:38.573 "data_offset": 0, 00:14:38.573 "data_size": 63488 00:14:38.573 }, 00:14:38.573 { 00:14:38.573 "name": null, 00:14:38.574 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:38.574 "is_configured": false, 00:14:38.574 "data_offset": 2048, 00:14:38.574 "data_size": 63488 00:14:38.574 }, 00:14:38.574 { 00:14:38.574 "name": null, 00:14:38.574 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:38.574 "is_configured": false, 00:14:38.574 "data_offset": 2048, 00:14:38.574 "data_size": 63488 00:14:38.574 } 00:14:38.574 ] 00:14:38.574 }' 00:14:38.574 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.574 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.834 [2024-11-26 12:57:56.422106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:38.834 [2024-11-26 12:57:56.422157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.834 [2024-11-26 12:57:56.422171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:38.834 [2024-11-26 12:57:56.422189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.834 [2024-11-26 12:57:56.422466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.834 [2024-11-26 12:57:56.422483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:38.834 [2024-11-26 12:57:56.422528] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:38.834 [2024-11-26 12:57:56.422545] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:38.834 pt2 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.834 [2024-11-26 12:57:56.434061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:14:38.834 [2024-11-26 12:57:56.434160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.834 [2024-11-26 12:57:56.434204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:38.834 [2024-11-26 12:57:56.434216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.834 [2024-11-26 12:57:56.434516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.834 [2024-11-26 12:57:56.434536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:38.834 [2024-11-26 12:57:56.434582] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:38.834 [2024-11-26 12:57:56.434600] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:38.834 pt3 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.834 [2024-11-26 12:57:56.446050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:38.834 [2024-11-26 12:57:56.446100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.834 [2024-11-26 12:57:56.446113] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:38.834 [2024-11-26 12:57:56.446122] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.834 [2024-11-26 12:57:56.446396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.834 [2024-11-26 12:57:56.446415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:38.834 [2024-11-26 12:57:56.446459] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:38.834 [2024-11-26 12:57:56.446476] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:38.834 [2024-11-26 12:57:56.446564] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:38.834 [2024-11-26 12:57:56.446575] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:38.834 [2024-11-26 12:57:56.446799] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:38.834 [2024-11-26 12:57:56.447256] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:38.834 [2024-11-26 12:57:56.447325] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:14:38.834 [2024-11-26 12:57:56.447423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.834 pt4 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.834 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.834 "name": "raid_bdev1", 00:14:38.834 "uuid": "fe4b68ef-4ae8-417b-83be-31e82c6d83fb", 00:14:38.834 "strip_size_kb": 64, 00:14:38.834 "state": "online", 00:14:38.834 "raid_level": "raid5f", 00:14:38.834 "superblock": true, 00:14:38.834 "num_base_bdevs": 4, 00:14:38.834 "num_base_bdevs_discovered": 4, 00:14:38.834 "num_base_bdevs_operational": 4, 00:14:38.834 "base_bdevs_list": [ 00:14:38.834 { 00:14:38.834 "name": "pt1", 00:14:38.834 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:38.834 "is_configured": true, 00:14:38.834 
"data_offset": 2048, 00:14:38.834 "data_size": 63488 00:14:38.834 }, 00:14:38.834 { 00:14:38.834 "name": "pt2", 00:14:38.834 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:38.834 "is_configured": true, 00:14:38.834 "data_offset": 2048, 00:14:38.834 "data_size": 63488 00:14:38.834 }, 00:14:38.834 { 00:14:38.834 "name": "pt3", 00:14:38.834 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:38.834 "is_configured": true, 00:14:38.834 "data_offset": 2048, 00:14:38.834 "data_size": 63488 00:14:38.834 }, 00:14:38.834 { 00:14:38.834 "name": "pt4", 00:14:38.834 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:38.834 "is_configured": true, 00:14:38.834 "data_offset": 2048, 00:14:38.834 "data_size": 63488 00:14:38.835 } 00:14:38.835 ] 00:14:38.835 }' 00:14:38.835 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.835 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.405 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:39.405 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:39.405 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:39.405 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:39.405 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:39.405 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:39.405 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:39.405 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.405 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.405 12:57:56 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:39.405 [2024-11-26 12:57:56.949347] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.405 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.405 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:39.405 "name": "raid_bdev1", 00:14:39.405 "aliases": [ 00:14:39.405 "fe4b68ef-4ae8-417b-83be-31e82c6d83fb" 00:14:39.405 ], 00:14:39.405 "product_name": "Raid Volume", 00:14:39.405 "block_size": 512, 00:14:39.405 "num_blocks": 190464, 00:14:39.405 "uuid": "fe4b68ef-4ae8-417b-83be-31e82c6d83fb", 00:14:39.405 "assigned_rate_limits": { 00:14:39.405 "rw_ios_per_sec": 0, 00:14:39.405 "rw_mbytes_per_sec": 0, 00:14:39.405 "r_mbytes_per_sec": 0, 00:14:39.405 "w_mbytes_per_sec": 0 00:14:39.405 }, 00:14:39.405 "claimed": false, 00:14:39.405 "zoned": false, 00:14:39.405 "supported_io_types": { 00:14:39.405 "read": true, 00:14:39.405 "write": true, 00:14:39.405 "unmap": false, 00:14:39.405 "flush": false, 00:14:39.405 "reset": true, 00:14:39.405 "nvme_admin": false, 00:14:39.405 "nvme_io": false, 00:14:39.405 "nvme_io_md": false, 00:14:39.405 "write_zeroes": true, 00:14:39.405 "zcopy": false, 00:14:39.405 "get_zone_info": false, 00:14:39.405 "zone_management": false, 00:14:39.405 "zone_append": false, 00:14:39.405 "compare": false, 00:14:39.405 "compare_and_write": false, 00:14:39.405 "abort": false, 00:14:39.405 "seek_hole": false, 00:14:39.405 "seek_data": false, 00:14:39.405 "copy": false, 00:14:39.405 "nvme_iov_md": false 00:14:39.405 }, 00:14:39.405 "driver_specific": { 00:14:39.405 "raid": { 00:14:39.405 "uuid": "fe4b68ef-4ae8-417b-83be-31e82c6d83fb", 00:14:39.405 "strip_size_kb": 64, 00:14:39.405 "state": "online", 00:14:39.405 "raid_level": "raid5f", 00:14:39.405 "superblock": true, 00:14:39.405 "num_base_bdevs": 4, 00:14:39.405 "num_base_bdevs_discovered": 4, 
00:14:39.405 "num_base_bdevs_operational": 4, 00:14:39.405 "base_bdevs_list": [ 00:14:39.405 { 00:14:39.405 "name": "pt1", 00:14:39.405 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:39.405 "is_configured": true, 00:14:39.405 "data_offset": 2048, 00:14:39.405 "data_size": 63488 00:14:39.405 }, 00:14:39.405 { 00:14:39.405 "name": "pt2", 00:14:39.405 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:39.405 "is_configured": true, 00:14:39.405 "data_offset": 2048, 00:14:39.405 "data_size": 63488 00:14:39.405 }, 00:14:39.405 { 00:14:39.405 "name": "pt3", 00:14:39.405 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:39.405 "is_configured": true, 00:14:39.405 "data_offset": 2048, 00:14:39.405 "data_size": 63488 00:14:39.405 }, 00:14:39.405 { 00:14:39.405 "name": "pt4", 00:14:39.405 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:39.405 "is_configured": true, 00:14:39.405 "data_offset": 2048, 00:14:39.405 "data_size": 63488 00:14:39.405 } 00:14:39.405 ] 00:14:39.405 } 00:14:39.405 } 00:14:39.405 }' 00:14:39.405 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:39.405 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:39.405 pt2 00:14:39.405 pt3 00:14:39.405 pt4' 00:14:39.405 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.405 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:39.405 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.405 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.405 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:14:39.405 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.405 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.665 12:57:57 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:39.665 [2024-11-26 12:57:57.252803] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.665 
12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' fe4b68ef-4ae8-417b-83be-31e82c6d83fb '!=' fe4b68ef-4ae8-417b-83be-31e82c6d83fb ']' 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.665 [2024-11-26 12:57:57.300596] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.665 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.925 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.925 "name": "raid_bdev1", 00:14:39.925 "uuid": "fe4b68ef-4ae8-417b-83be-31e82c6d83fb", 00:14:39.925 "strip_size_kb": 64, 00:14:39.925 "state": "online", 00:14:39.925 "raid_level": "raid5f", 00:14:39.925 "superblock": true, 00:14:39.925 "num_base_bdevs": 4, 00:14:39.925 "num_base_bdevs_discovered": 3, 00:14:39.925 "num_base_bdevs_operational": 3, 00:14:39.925 "base_bdevs_list": [ 00:14:39.925 { 00:14:39.925 "name": null, 00:14:39.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.925 "is_configured": false, 00:14:39.925 "data_offset": 0, 00:14:39.925 "data_size": 63488 00:14:39.925 }, 00:14:39.925 { 00:14:39.925 "name": "pt2", 00:14:39.925 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:39.925 "is_configured": true, 00:14:39.925 "data_offset": 2048, 00:14:39.925 "data_size": 63488 00:14:39.925 }, 00:14:39.925 { 00:14:39.925 "name": "pt3", 00:14:39.925 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:39.925 "is_configured": true, 00:14:39.925 "data_offset": 2048, 00:14:39.925 "data_size": 63488 00:14:39.925 }, 00:14:39.925 { 00:14:39.925 "name": "pt4", 00:14:39.925 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:39.925 "is_configured": true, 00:14:39.925 
"data_offset": 2048, 00:14:39.925 "data_size": 63488 00:14:39.925 } 00:14:39.925 ] 00:14:39.925 }' 00:14:39.925 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.925 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.185 [2024-11-26 12:57:57.775758] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:40.185 [2024-11-26 12:57:57.775828] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:40.185 [2024-11-26 12:57:57.775894] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:40.185 [2024-11-26 12:57:57.775974] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:40.185 [2024-11-26 12:57:57.776007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.185 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.185 [2024-11-26 12:57:57.859659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:40.185 [2024-11-26 12:57:57.859759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.185 [2024-11-26 12:57:57.859791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:40.185 [2024-11-26 12:57:57.859820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.185 [2024-11-26 12:57:57.861899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.185 [2024-11-26 12:57:57.861989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:40.185 [2024-11-26 12:57:57.862062] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:40.185 [2024-11-26 12:57:57.862110] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:40.446 pt2 00:14:40.446 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.446 12:57:57 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:40.446 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.446 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.446 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.446 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.446 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:40.446 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.446 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.446 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.446 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.446 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.446 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.446 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.446 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.446 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.446 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.446 "name": "raid_bdev1", 00:14:40.446 "uuid": "fe4b68ef-4ae8-417b-83be-31e82c6d83fb", 00:14:40.446 "strip_size_kb": 64, 00:14:40.446 "state": "configuring", 00:14:40.446 "raid_level": "raid5f", 00:14:40.446 "superblock": true, 00:14:40.446 
"num_base_bdevs": 4, 00:14:40.446 "num_base_bdevs_discovered": 1, 00:14:40.446 "num_base_bdevs_operational": 3, 00:14:40.446 "base_bdevs_list": [ 00:14:40.446 { 00:14:40.446 "name": null, 00:14:40.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.446 "is_configured": false, 00:14:40.446 "data_offset": 2048, 00:14:40.446 "data_size": 63488 00:14:40.446 }, 00:14:40.446 { 00:14:40.446 "name": "pt2", 00:14:40.446 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:40.446 "is_configured": true, 00:14:40.446 "data_offset": 2048, 00:14:40.446 "data_size": 63488 00:14:40.446 }, 00:14:40.446 { 00:14:40.446 "name": null, 00:14:40.446 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:40.446 "is_configured": false, 00:14:40.446 "data_offset": 2048, 00:14:40.446 "data_size": 63488 00:14:40.446 }, 00:14:40.446 { 00:14:40.446 "name": null, 00:14:40.446 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:40.446 "is_configured": false, 00:14:40.446 "data_offset": 2048, 00:14:40.446 "data_size": 63488 00:14:40.446 } 00:14:40.446 ] 00:14:40.446 }' 00:14:40.446 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.446 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.706 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:40.706 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:40.706 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:40.706 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.706 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.706 [2024-11-26 12:57:58.294944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:40.706 [2024-11-26 
12:57:58.295039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.706 [2024-11-26 12:57:58.295056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:40.706 [2024-11-26 12:57:58.295066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.706 [2024-11-26 12:57:58.295385] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.706 [2024-11-26 12:57:58.295406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:40.706 [2024-11-26 12:57:58.295452] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:40.706 [2024-11-26 12:57:58.295478] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:40.706 pt3 00:14:40.706 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.706 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:40.706 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.706 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.706 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.706 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.706 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:40.706 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.706 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.706 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:40.706 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.706 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.706 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.706 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.706 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.706 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.706 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.706 "name": "raid_bdev1", 00:14:40.706 "uuid": "fe4b68ef-4ae8-417b-83be-31e82c6d83fb", 00:14:40.706 "strip_size_kb": 64, 00:14:40.706 "state": "configuring", 00:14:40.706 "raid_level": "raid5f", 00:14:40.706 "superblock": true, 00:14:40.706 "num_base_bdevs": 4, 00:14:40.706 "num_base_bdevs_discovered": 2, 00:14:40.706 "num_base_bdevs_operational": 3, 00:14:40.706 "base_bdevs_list": [ 00:14:40.706 { 00:14:40.706 "name": null, 00:14:40.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.706 "is_configured": false, 00:14:40.706 "data_offset": 2048, 00:14:40.706 "data_size": 63488 00:14:40.706 }, 00:14:40.706 { 00:14:40.706 "name": "pt2", 00:14:40.706 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:40.706 "is_configured": true, 00:14:40.706 "data_offset": 2048, 00:14:40.706 "data_size": 63488 00:14:40.706 }, 00:14:40.706 { 00:14:40.706 "name": "pt3", 00:14:40.706 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:40.706 "is_configured": true, 00:14:40.706 "data_offset": 2048, 00:14:40.706 "data_size": 63488 00:14:40.706 }, 00:14:40.706 { 00:14:40.706 "name": null, 00:14:40.706 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:40.706 "is_configured": false, 00:14:40.706 "data_offset": 2048, 
00:14:40.706 "data_size": 63488 00:14:40.706 } 00:14:40.706 ] 00:14:40.706 }' 00:14:40.706 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.706 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.276 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:41.276 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:41.276 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:14:41.276 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:41.276 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.276 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.276 [2024-11-26 12:57:58.706242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:41.276 [2024-11-26 12:57:58.706343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.276 [2024-11-26 12:57:58.706377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:41.276 [2024-11-26 12:57:58.706406] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.276 [2024-11-26 12:57:58.706720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.276 [2024-11-26 12:57:58.706779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:41.276 [2024-11-26 12:57:58.706852] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:41.276 [2024-11-26 12:57:58.706900] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:41.276 [2024-11-26 12:57:58.707006] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:41.276 [2024-11-26 12:57:58.707046] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:41.276 [2024-11-26 12:57:58.707292] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:41.276 [2024-11-26 12:57:58.707835] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:41.276 [2024-11-26 12:57:58.707886] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:14:41.276 [2024-11-26 12:57:58.708144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.276 pt4 00:14:41.276 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.276 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:41.276 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.276 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.276 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.276 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.276 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.276 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.276 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.277 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.277 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.277 
12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.277 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.277 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.277 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.277 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.277 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.277 "name": "raid_bdev1", 00:14:41.277 "uuid": "fe4b68ef-4ae8-417b-83be-31e82c6d83fb", 00:14:41.277 "strip_size_kb": 64, 00:14:41.277 "state": "online", 00:14:41.277 "raid_level": "raid5f", 00:14:41.277 "superblock": true, 00:14:41.277 "num_base_bdevs": 4, 00:14:41.277 "num_base_bdevs_discovered": 3, 00:14:41.277 "num_base_bdevs_operational": 3, 00:14:41.277 "base_bdevs_list": [ 00:14:41.277 { 00:14:41.277 "name": null, 00:14:41.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.277 "is_configured": false, 00:14:41.277 "data_offset": 2048, 00:14:41.277 "data_size": 63488 00:14:41.277 }, 00:14:41.277 { 00:14:41.277 "name": "pt2", 00:14:41.277 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:41.277 "is_configured": true, 00:14:41.277 "data_offset": 2048, 00:14:41.277 "data_size": 63488 00:14:41.277 }, 00:14:41.277 { 00:14:41.277 "name": "pt3", 00:14:41.277 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:41.277 "is_configured": true, 00:14:41.277 "data_offset": 2048, 00:14:41.277 "data_size": 63488 00:14:41.277 }, 00:14:41.277 { 00:14:41.277 "name": "pt4", 00:14:41.277 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:41.277 "is_configured": true, 00:14:41.277 "data_offset": 2048, 00:14:41.277 "data_size": 63488 00:14:41.277 } 00:14:41.277 ] 00:14:41.277 }' 00:14:41.277 12:57:58 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.277 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.537 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:41.537 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.537 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.537 [2024-11-26 12:57:59.133501] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:41.537 [2024-11-26 12:57:59.133526] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:41.537 [2024-11-26 12:57:59.133575] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.537 [2024-11-26 12:57:59.133633] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.537 [2024-11-26 12:57:59.133642] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:14:41.537 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.537 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.537 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:41.537 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.537 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.537 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.537 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:41.537 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:14:41.537 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:14:41.537 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:14:41.537 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:14:41.537 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.537 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.537 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.537 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:41.537 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.537 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.537 [2024-11-26 12:57:59.205393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:41.537 [2024-11-26 12:57:59.205502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.537 [2024-11-26 12:57:59.205538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:14:41.537 [2024-11-26 12:57:59.205566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.537 [2024-11-26 12:57:59.207741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.537 [2024-11-26 12:57:59.207829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:41.537 [2024-11-26 12:57:59.207927] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:41.537 [2024-11-26 12:57:59.207987] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:41.537 
[2024-11-26 12:57:59.208099] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:41.537 [2024-11-26 12:57:59.208155] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:41.537 [2024-11-26 12:57:59.208206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:14:41.537 [2024-11-26 12:57:59.208291] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:41.537 [2024-11-26 12:57:59.208451] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:41.537 pt1 00:14:41.537 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.537 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:14:41.537 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:41.537 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.537 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.538 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.538 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.538 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.538 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.538 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.538 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.538 12:57:59 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.797 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.797 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.797 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.797 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.797 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.797 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.797 "name": "raid_bdev1", 00:14:41.797 "uuid": "fe4b68ef-4ae8-417b-83be-31e82c6d83fb", 00:14:41.797 "strip_size_kb": 64, 00:14:41.797 "state": "configuring", 00:14:41.797 "raid_level": "raid5f", 00:14:41.797 "superblock": true, 00:14:41.797 "num_base_bdevs": 4, 00:14:41.797 "num_base_bdevs_discovered": 2, 00:14:41.797 "num_base_bdevs_operational": 3, 00:14:41.797 "base_bdevs_list": [ 00:14:41.797 { 00:14:41.797 "name": null, 00:14:41.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.797 "is_configured": false, 00:14:41.797 "data_offset": 2048, 00:14:41.797 "data_size": 63488 00:14:41.797 }, 00:14:41.797 { 00:14:41.797 "name": "pt2", 00:14:41.797 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:41.797 "is_configured": true, 00:14:41.797 "data_offset": 2048, 00:14:41.797 "data_size": 63488 00:14:41.797 }, 00:14:41.797 { 00:14:41.797 "name": "pt3", 00:14:41.797 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:41.797 "is_configured": true, 00:14:41.797 "data_offset": 2048, 00:14:41.797 "data_size": 63488 00:14:41.797 }, 00:14:41.797 { 00:14:41.797 "name": null, 00:14:41.797 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:41.797 "is_configured": false, 00:14:41.797 "data_offset": 2048, 00:14:41.797 "data_size": 63488 00:14:41.797 } 00:14:41.797 ] 
00:14:41.797 }' 00:14:41.797 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.797 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.057 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:42.057 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:42.057 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.057 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.057 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.057 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:42.057 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:42.057 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.057 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.057 [2024-11-26 12:57:59.704522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:42.057 [2024-11-26 12:57:59.704635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.057 [2024-11-26 12:57:59.704666] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:42.057 [2024-11-26 12:57:59.704697] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.057 [2024-11-26 12:57:59.705037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.057 [2024-11-26 12:57:59.705098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:14:42.057 [2024-11-26 12:57:59.705191] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:42.057 [2024-11-26 12:57:59.705243] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:42.057 [2024-11-26 12:57:59.705357] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:14:42.057 [2024-11-26 12:57:59.705402] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:42.057 [2024-11-26 12:57:59.705643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:42.057 [2024-11-26 12:57:59.706195] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:14:42.057 [2024-11-26 12:57:59.706247] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:14:42.057 [2024-11-26 12:57:59.706454] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.057 pt4 00:14:42.057 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.057 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:42.057 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.057 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.057 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.057 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.057 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.057 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.057 12:57:59 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.057 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.058 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.058 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.058 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.058 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.058 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.316 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.316 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.316 "name": "raid_bdev1", 00:14:42.316 "uuid": "fe4b68ef-4ae8-417b-83be-31e82c6d83fb", 00:14:42.316 "strip_size_kb": 64, 00:14:42.316 "state": "online", 00:14:42.316 "raid_level": "raid5f", 00:14:42.316 "superblock": true, 00:14:42.316 "num_base_bdevs": 4, 00:14:42.316 "num_base_bdevs_discovered": 3, 00:14:42.316 "num_base_bdevs_operational": 3, 00:14:42.316 "base_bdevs_list": [ 00:14:42.317 { 00:14:42.317 "name": null, 00:14:42.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.317 "is_configured": false, 00:14:42.317 "data_offset": 2048, 00:14:42.317 "data_size": 63488 00:14:42.317 }, 00:14:42.317 { 00:14:42.317 "name": "pt2", 00:14:42.317 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:42.317 "is_configured": true, 00:14:42.317 "data_offset": 2048, 00:14:42.317 "data_size": 63488 00:14:42.317 }, 00:14:42.317 { 00:14:42.317 "name": "pt3", 00:14:42.317 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:42.317 "is_configured": true, 00:14:42.317 "data_offset": 2048, 00:14:42.317 "data_size": 63488 
00:14:42.317 }, 00:14:42.317 { 00:14:42.317 "name": "pt4", 00:14:42.317 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:42.317 "is_configured": true, 00:14:42.317 "data_offset": 2048, 00:14:42.317 "data_size": 63488 00:14:42.317 } 00:14:42.317 ] 00:14:42.317 }' 00:14:42.317 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.317 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.576 12:58:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:42.576 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.576 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.576 12:58:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:42.576 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.576 12:58:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:42.576 12:58:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:42.576 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.576 12:58:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:42.576 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.576 [2024-11-26 12:58:00.155949] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:42.576 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.576 12:58:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' fe4b68ef-4ae8-417b-83be-31e82c6d83fb '!=' fe4b68ef-4ae8-417b-83be-31e82c6d83fb ']' 00:14:42.576 12:58:00 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 94782 00:14:42.576 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 94782 ']' 00:14:42.576 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 94782 00:14:42.576 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:14:42.576 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:42.576 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94782 00:14:42.576 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:42.576 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:42.577 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94782' 00:14:42.577 killing process with pid 94782 00:14:42.577 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 94782 00:14:42.577 [2024-11-26 12:58:00.238944] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:42.577 [2024-11-26 12:58:00.239005] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:42.577 [2024-11-26 12:58:00.239071] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:42.577 [2024-11-26 12:58:00.239080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:14:42.577 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 94782 00:14:42.836 [2024-11-26 12:58:00.282798] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:43.097 12:58:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:43.097 
00:14:43.097 real 0m7.058s 00:14:43.097 user 0m11.801s 00:14:43.097 sys 0m1.581s 00:14:43.097 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:43.097 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.097 ************************************ 00:14:43.097 END TEST raid5f_superblock_test 00:14:43.097 ************************************ 00:14:43.097 12:58:00 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:43.097 12:58:00 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:14:43.097 12:58:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:43.097 12:58:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:43.097 12:58:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:43.097 ************************************ 00:14:43.097 START TEST raid5f_rebuild_test 00:14:43.097 ************************************ 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:43.097 12:58:00 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=95252 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 95252 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 95252 ']' 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:43.097 12:58:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.097 [2024-11-26 12:58:00.728357] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:43.097 [2024-11-26 12:58:00.728546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95252 ] 00:14:43.097 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:43.097 Zero copy mechanism will not be used. 00:14:43.357 [2024-11-26 12:58:00.893043] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.357 [2024-11-26 12:58:00.940480] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.357 [2024-11-26 12:58:00.983185] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:43.357 [2024-11-26 12:58:00.983219] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:43.926 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:43.926 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:14:43.926 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:43.926 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:43.926 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.926 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.926 BaseBdev1_malloc 00:14:43.926 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.926 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:43.926 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.926 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:14:43.926 [2024-11-26 12:58:01.553513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:43.926 [2024-11-26 12:58:01.553582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.926 [2024-11-26 12:58:01.553606] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:43.926 [2024-11-26 12:58:01.553619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.926 [2024-11-26 12:58:01.555618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.927 [2024-11-26 12:58:01.555747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:43.927 BaseBdev1 00:14:43.927 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.927 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:43.927 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:43.927 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.927 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.927 BaseBdev2_malloc 00:14:43.927 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.927 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:43.927 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.927 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.927 [2024-11-26 12:58:01.598870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:43.927 [2024-11-26 12:58:01.598977] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.927 [2024-11-26 12:58:01.599023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:43.927 [2024-11-26 12:58:01.599045] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.243 [2024-11-26 12:58:01.603742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.243 [2024-11-26 12:58:01.603812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:44.243 BaseBdev2 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.243 BaseBdev3_malloc 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.243 [2024-11-26 12:58:01.629978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:44.243 [2024-11-26 12:58:01.630031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.243 [2024-11-26 12:58:01.630054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:44.243 
[2024-11-26 12:58:01.630062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.243 [2024-11-26 12:58:01.632056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.243 [2024-11-26 12:58:01.632199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:44.243 BaseBdev3 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.243 BaseBdev4_malloc 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.243 [2024-11-26 12:58:01.658543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:44.243 [2024-11-26 12:58:01.658687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.243 [2024-11-26 12:58:01.658719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:44.243 [2024-11-26 12:58:01.658728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.243 [2024-11-26 12:58:01.660724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:14:44.243 [2024-11-26 12:58:01.660772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:44.243 BaseBdev4 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.243 spare_malloc 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.243 spare_delay 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.243 [2024-11-26 12:58:01.698950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:44.243 [2024-11-26 12:58:01.699000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.243 [2024-11-26 12:58:01.699021] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:44.243 [2024-11-26 12:58:01.699029] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.243 [2024-11-26 12:58:01.701081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.243 [2024-11-26 12:58:01.701117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:44.243 spare 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.243 [2024-11-26 12:58:01.711009] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:44.243 [2024-11-26 12:58:01.712778] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:44.243 [2024-11-26 12:58:01.712918] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:44.243 [2024-11-26 12:58:01.712961] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:44.243 [2024-11-26 12:58:01.713057] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:44.243 [2024-11-26 12:58:01.713068] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:44.243 [2024-11-26 12:58:01.713321] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:44.243 [2024-11-26 12:58:01.713745] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:44.243 [2024-11-26 12:58:01.713768] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:44.243 [2024-11-26 
12:58:01.713884] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.243 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.243 "name": "raid_bdev1", 00:14:44.244 "uuid": 
"aa5e25bd-e55a-45b7-bc98-cde0bed6f97d", 00:14:44.244 "strip_size_kb": 64, 00:14:44.244 "state": "online", 00:14:44.244 "raid_level": "raid5f", 00:14:44.244 "superblock": false, 00:14:44.244 "num_base_bdevs": 4, 00:14:44.244 "num_base_bdevs_discovered": 4, 00:14:44.244 "num_base_bdevs_operational": 4, 00:14:44.244 "base_bdevs_list": [ 00:14:44.244 { 00:14:44.244 "name": "BaseBdev1", 00:14:44.244 "uuid": "324a7dff-602f-5424-9961-e93d9bb66fa9", 00:14:44.244 "is_configured": true, 00:14:44.244 "data_offset": 0, 00:14:44.244 "data_size": 65536 00:14:44.244 }, 00:14:44.244 { 00:14:44.244 "name": "BaseBdev2", 00:14:44.244 "uuid": "496c8b25-edc5-58f0-9139-709a5482a6e0", 00:14:44.244 "is_configured": true, 00:14:44.244 "data_offset": 0, 00:14:44.244 "data_size": 65536 00:14:44.244 }, 00:14:44.244 { 00:14:44.244 "name": "BaseBdev3", 00:14:44.244 "uuid": "8894e266-7c38-5536-b68b-01e324c52ff3", 00:14:44.244 "is_configured": true, 00:14:44.244 "data_offset": 0, 00:14:44.244 "data_size": 65536 00:14:44.244 }, 00:14:44.244 { 00:14:44.244 "name": "BaseBdev4", 00:14:44.244 "uuid": "95b40acb-8004-5368-93b2-c39169de3ebb", 00:14:44.244 "is_configured": true, 00:14:44.244 "data_offset": 0, 00:14:44.244 "data_size": 65536 00:14:44.244 } 00:14:44.244 ] 00:14:44.244 }' 00:14:44.244 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.244 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.503 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:44.503 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.503 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.503 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:44.503 [2024-11-26 12:58:02.170986] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:14:44.762 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.762 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:14:44.762 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.762 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.762 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.762 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:44.762 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.762 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:44.762 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:44.762 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:44.762 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:44.762 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:44.762 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:44.762 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:44.762 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:44.762 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:44.762 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:44.762 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:44.762 12:58:02 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:44.762 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:44.762 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:45.022 [2024-11-26 12:58:02.478336] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:45.022 /dev/nbd0 00:14:45.022 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:45.022 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:45.022 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:45.022 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:45.022 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:45.022 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:45.022 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:45.022 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:45.022 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:45.022 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:45.022 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:45.022 1+0 records in 00:14:45.022 1+0 records out 00:14:45.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003421 s, 12.0 MB/s 00:14:45.022 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.022 12:58:02 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:45.022 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.022 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:45.022 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:45.022 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:45.022 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:45.022 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:45.022 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:14:45.022 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:14:45.022 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:14:45.589 512+0 records in 00:14:45.589 512+0 records out 00:14:45.589 100663296 bytes (101 MB, 96 MiB) copied, 0.580037 s, 174 MB/s 00:14:45.589 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:45.589 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:45.589 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:45.589 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:45.589 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:45.589 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:45.589 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:14:45.849 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:45.849 [2024-11-26 12:58:03.345349] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.849 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:45.849 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:45.849 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:45.849 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:45.849 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:45.849 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:45.849 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:45.849 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:45.849 12:58:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.849 12:58:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.849 [2024-11-26 12:58:03.359362] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:45.849 12:58:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.849 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:45.849 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.849 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.849 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.849 12:58:03 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.849 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.849 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.849 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.849 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.849 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.849 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.849 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.849 12:58:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.849 12:58:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.849 12:58:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.849 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.849 "name": "raid_bdev1", 00:14:45.849 "uuid": "aa5e25bd-e55a-45b7-bc98-cde0bed6f97d", 00:14:45.849 "strip_size_kb": 64, 00:14:45.849 "state": "online", 00:14:45.849 "raid_level": "raid5f", 00:14:45.849 "superblock": false, 00:14:45.849 "num_base_bdevs": 4, 00:14:45.849 "num_base_bdevs_discovered": 3, 00:14:45.849 "num_base_bdevs_operational": 3, 00:14:45.849 "base_bdevs_list": [ 00:14:45.849 { 00:14:45.849 "name": null, 00:14:45.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.849 "is_configured": false, 00:14:45.849 "data_offset": 0, 00:14:45.849 "data_size": 65536 00:14:45.849 }, 00:14:45.849 { 00:14:45.849 "name": "BaseBdev2", 00:14:45.849 "uuid": "496c8b25-edc5-58f0-9139-709a5482a6e0", 00:14:45.849 "is_configured": true, 00:14:45.849 
"data_offset": 0, 00:14:45.849 "data_size": 65536 00:14:45.849 }, 00:14:45.849 { 00:14:45.849 "name": "BaseBdev3", 00:14:45.849 "uuid": "8894e266-7c38-5536-b68b-01e324c52ff3", 00:14:45.849 "is_configured": true, 00:14:45.849 "data_offset": 0, 00:14:45.849 "data_size": 65536 00:14:45.849 }, 00:14:45.849 { 00:14:45.849 "name": "BaseBdev4", 00:14:45.849 "uuid": "95b40acb-8004-5368-93b2-c39169de3ebb", 00:14:45.849 "is_configured": true, 00:14:45.849 "data_offset": 0, 00:14:45.849 "data_size": 65536 00:14:45.849 } 00:14:45.849 ] 00:14:45.849 }' 00:14:45.849 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.849 12:58:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.418 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:46.418 12:58:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.418 12:58:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.418 [2024-11-26 12:58:03.818573] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:46.418 [2024-11-26 12:58:03.821955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:14:46.418 [2024-11-26 12:58:03.824119] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:46.418 12:58:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.418 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:47.367 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.367 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.367 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:14:47.367 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.367 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.367 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.367 12:58:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.367 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.367 12:58:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.367 12:58:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.367 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.367 "name": "raid_bdev1", 00:14:47.367 "uuid": "aa5e25bd-e55a-45b7-bc98-cde0bed6f97d", 00:14:47.367 "strip_size_kb": 64, 00:14:47.367 "state": "online", 00:14:47.367 "raid_level": "raid5f", 00:14:47.367 "superblock": false, 00:14:47.367 "num_base_bdevs": 4, 00:14:47.367 "num_base_bdevs_discovered": 4, 00:14:47.367 "num_base_bdevs_operational": 4, 00:14:47.367 "process": { 00:14:47.367 "type": "rebuild", 00:14:47.367 "target": "spare", 00:14:47.367 "progress": { 00:14:47.367 "blocks": 19200, 00:14:47.367 "percent": 9 00:14:47.367 } 00:14:47.367 }, 00:14:47.367 "base_bdevs_list": [ 00:14:47.367 { 00:14:47.367 "name": "spare", 00:14:47.367 "uuid": "0184a29d-ab82-5fb9-9bea-7ba2a13091f3", 00:14:47.367 "is_configured": true, 00:14:47.367 "data_offset": 0, 00:14:47.367 "data_size": 65536 00:14:47.367 }, 00:14:47.367 { 00:14:47.367 "name": "BaseBdev2", 00:14:47.367 "uuid": "496c8b25-edc5-58f0-9139-709a5482a6e0", 00:14:47.367 "is_configured": true, 00:14:47.367 "data_offset": 0, 00:14:47.367 "data_size": 65536 00:14:47.367 }, 00:14:47.367 { 00:14:47.367 "name": "BaseBdev3", 00:14:47.367 "uuid": 
"8894e266-7c38-5536-b68b-01e324c52ff3", 00:14:47.367 "is_configured": true, 00:14:47.367 "data_offset": 0, 00:14:47.367 "data_size": 65536 00:14:47.367 }, 00:14:47.367 { 00:14:47.367 "name": "BaseBdev4", 00:14:47.367 "uuid": "95b40acb-8004-5368-93b2-c39169de3ebb", 00:14:47.367 "is_configured": true, 00:14:47.367 "data_offset": 0, 00:14:47.367 "data_size": 65536 00:14:47.368 } 00:14:47.368 ] 00:14:47.368 }' 00:14:47.368 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.368 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:47.368 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.368 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:47.368 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:47.368 12:58:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.368 12:58:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.368 [2024-11-26 12:58:04.983313] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:47.368 [2024-11-26 12:58:05.029443] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:47.368 [2024-11-26 12:58:05.029498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.368 [2024-11-26 12:58:05.029520] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:47.368 [2024-11-26 12:58:05.029534] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:47.645 12:58:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.645 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:47.645 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:47.645 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.645 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.645 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.645 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.645 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.645 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.645 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.645 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.645 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.645 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.645 12:58:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.645 12:58:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.645 12:58:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.645 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.645 "name": "raid_bdev1", 00:14:47.645 "uuid": "aa5e25bd-e55a-45b7-bc98-cde0bed6f97d", 00:14:47.645 "strip_size_kb": 64, 00:14:47.645 "state": "online", 00:14:47.645 "raid_level": "raid5f", 00:14:47.645 "superblock": false, 00:14:47.645 "num_base_bdevs": 4, 00:14:47.645 "num_base_bdevs_discovered": 3, 00:14:47.645 
"num_base_bdevs_operational": 3, 00:14:47.645 "base_bdevs_list": [ 00:14:47.645 { 00:14:47.645 "name": null, 00:14:47.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.645 "is_configured": false, 00:14:47.645 "data_offset": 0, 00:14:47.645 "data_size": 65536 00:14:47.645 }, 00:14:47.645 { 00:14:47.645 "name": "BaseBdev2", 00:14:47.645 "uuid": "496c8b25-edc5-58f0-9139-709a5482a6e0", 00:14:47.645 "is_configured": true, 00:14:47.645 "data_offset": 0, 00:14:47.645 "data_size": 65536 00:14:47.645 }, 00:14:47.645 { 00:14:47.645 "name": "BaseBdev3", 00:14:47.645 "uuid": "8894e266-7c38-5536-b68b-01e324c52ff3", 00:14:47.645 "is_configured": true, 00:14:47.645 "data_offset": 0, 00:14:47.645 "data_size": 65536 00:14:47.645 }, 00:14:47.645 { 00:14:47.645 "name": "BaseBdev4", 00:14:47.645 "uuid": "95b40acb-8004-5368-93b2-c39169de3ebb", 00:14:47.645 "is_configured": true, 00:14:47.645 "data_offset": 0, 00:14:47.645 "data_size": 65536 00:14:47.645 } 00:14:47.645 ] 00:14:47.645 }' 00:14:47.645 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.645 12:58:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.905 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:47.905 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.905 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:47.905 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:47.905 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.905 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.905 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.905 12:58:05 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.905 12:58:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.905 12:58:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.905 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.905 "name": "raid_bdev1", 00:14:47.905 "uuid": "aa5e25bd-e55a-45b7-bc98-cde0bed6f97d", 00:14:47.905 "strip_size_kb": 64, 00:14:47.905 "state": "online", 00:14:47.905 "raid_level": "raid5f", 00:14:47.905 "superblock": false, 00:14:47.905 "num_base_bdevs": 4, 00:14:47.905 "num_base_bdevs_discovered": 3, 00:14:47.905 "num_base_bdevs_operational": 3, 00:14:47.905 "base_bdevs_list": [ 00:14:47.905 { 00:14:47.905 "name": null, 00:14:47.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.905 "is_configured": false, 00:14:47.905 "data_offset": 0, 00:14:47.905 "data_size": 65536 00:14:47.905 }, 00:14:47.905 { 00:14:47.905 "name": "BaseBdev2", 00:14:47.905 "uuid": "496c8b25-edc5-58f0-9139-709a5482a6e0", 00:14:47.905 "is_configured": true, 00:14:47.905 "data_offset": 0, 00:14:47.905 "data_size": 65536 00:14:47.905 }, 00:14:47.905 { 00:14:47.905 "name": "BaseBdev3", 00:14:47.905 "uuid": "8894e266-7c38-5536-b68b-01e324c52ff3", 00:14:47.905 "is_configured": true, 00:14:47.905 "data_offset": 0, 00:14:47.905 "data_size": 65536 00:14:47.905 }, 00:14:47.905 { 00:14:47.905 "name": "BaseBdev4", 00:14:47.905 "uuid": "95b40acb-8004-5368-93b2-c39169de3ebb", 00:14:47.905 "is_configured": true, 00:14:47.905 "data_offset": 0, 00:14:47.905 "data_size": 65536 00:14:47.905 } 00:14:47.905 ] 00:14:47.905 }' 00:14:47.905 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.905 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:47.905 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:14:48.165 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:48.165 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:48.165 12:58:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.165 12:58:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.165 [2024-11-26 12:58:05.621874] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:48.165 [2024-11-26 12:58:05.625000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:14:48.165 [2024-11-26 12:58:05.627214] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:48.165 12:58:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.165 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:49.105 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.105 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.105 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.105 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.105 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.105 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.105 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.105 12:58:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.105 12:58:06 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.105 12:58:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.105 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.105 "name": "raid_bdev1", 00:14:49.105 "uuid": "aa5e25bd-e55a-45b7-bc98-cde0bed6f97d", 00:14:49.105 "strip_size_kb": 64, 00:14:49.105 "state": "online", 00:14:49.105 "raid_level": "raid5f", 00:14:49.105 "superblock": false, 00:14:49.105 "num_base_bdevs": 4, 00:14:49.105 "num_base_bdevs_discovered": 4, 00:14:49.105 "num_base_bdevs_operational": 4, 00:14:49.105 "process": { 00:14:49.105 "type": "rebuild", 00:14:49.105 "target": "spare", 00:14:49.105 "progress": { 00:14:49.105 "blocks": 19200, 00:14:49.105 "percent": 9 00:14:49.105 } 00:14:49.105 }, 00:14:49.105 "base_bdevs_list": [ 00:14:49.105 { 00:14:49.105 "name": "spare", 00:14:49.105 "uuid": "0184a29d-ab82-5fb9-9bea-7ba2a13091f3", 00:14:49.105 "is_configured": true, 00:14:49.105 "data_offset": 0, 00:14:49.105 "data_size": 65536 00:14:49.105 }, 00:14:49.105 { 00:14:49.105 "name": "BaseBdev2", 00:14:49.105 "uuid": "496c8b25-edc5-58f0-9139-709a5482a6e0", 00:14:49.105 "is_configured": true, 00:14:49.105 "data_offset": 0, 00:14:49.105 "data_size": 65536 00:14:49.105 }, 00:14:49.105 { 00:14:49.105 "name": "BaseBdev3", 00:14:49.105 "uuid": "8894e266-7c38-5536-b68b-01e324c52ff3", 00:14:49.105 "is_configured": true, 00:14:49.105 "data_offset": 0, 00:14:49.105 "data_size": 65536 00:14:49.105 }, 00:14:49.105 { 00:14:49.105 "name": "BaseBdev4", 00:14:49.105 "uuid": "95b40acb-8004-5368-93b2-c39169de3ebb", 00:14:49.105 "is_configured": true, 00:14:49.105 "data_offset": 0, 00:14:49.105 "data_size": 65536 00:14:49.105 } 00:14:49.105 ] 00:14:49.105 }' 00:14:49.105 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.105 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:14:49.105 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.105 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.105 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:49.105 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:49.105 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:49.105 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=510 00:14:49.105 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:49.105 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.105 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.106 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.106 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.106 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.106 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.106 12:58:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.106 12:58:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.106 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.106 12:58:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.365 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.365 
"name": "raid_bdev1", 00:14:49.365 "uuid": "aa5e25bd-e55a-45b7-bc98-cde0bed6f97d", 00:14:49.365 "strip_size_kb": 64, 00:14:49.365 "state": "online", 00:14:49.365 "raid_level": "raid5f", 00:14:49.365 "superblock": false, 00:14:49.365 "num_base_bdevs": 4, 00:14:49.365 "num_base_bdevs_discovered": 4, 00:14:49.365 "num_base_bdevs_operational": 4, 00:14:49.365 "process": { 00:14:49.365 "type": "rebuild", 00:14:49.365 "target": "spare", 00:14:49.365 "progress": { 00:14:49.365 "blocks": 21120, 00:14:49.365 "percent": 10 00:14:49.365 } 00:14:49.365 }, 00:14:49.365 "base_bdevs_list": [ 00:14:49.365 { 00:14:49.365 "name": "spare", 00:14:49.365 "uuid": "0184a29d-ab82-5fb9-9bea-7ba2a13091f3", 00:14:49.365 "is_configured": true, 00:14:49.365 "data_offset": 0, 00:14:49.365 "data_size": 65536 00:14:49.365 }, 00:14:49.365 { 00:14:49.365 "name": "BaseBdev2", 00:14:49.365 "uuid": "496c8b25-edc5-58f0-9139-709a5482a6e0", 00:14:49.365 "is_configured": true, 00:14:49.365 "data_offset": 0, 00:14:49.365 "data_size": 65536 00:14:49.365 }, 00:14:49.365 { 00:14:49.365 "name": "BaseBdev3", 00:14:49.365 "uuid": "8894e266-7c38-5536-b68b-01e324c52ff3", 00:14:49.365 "is_configured": true, 00:14:49.365 "data_offset": 0, 00:14:49.365 "data_size": 65536 00:14:49.365 }, 00:14:49.365 { 00:14:49.365 "name": "BaseBdev4", 00:14:49.365 "uuid": "95b40acb-8004-5368-93b2-c39169de3ebb", 00:14:49.365 "is_configured": true, 00:14:49.365 "data_offset": 0, 00:14:49.365 "data_size": 65536 00:14:49.365 } 00:14:49.365 ] 00:14:49.365 }' 00:14:49.365 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.365 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:49.365 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.365 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.365 12:58:06 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:50.301 12:58:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:50.301 12:58:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.301 12:58:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.301 12:58:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:50.301 12:58:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.301 12:58:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.301 12:58:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.301 12:58:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.301 12:58:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.301 12:58:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.301 12:58:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.301 12:58:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.301 "name": "raid_bdev1", 00:14:50.301 "uuid": "aa5e25bd-e55a-45b7-bc98-cde0bed6f97d", 00:14:50.301 "strip_size_kb": 64, 00:14:50.301 "state": "online", 00:14:50.301 "raid_level": "raid5f", 00:14:50.301 "superblock": false, 00:14:50.301 "num_base_bdevs": 4, 00:14:50.301 "num_base_bdevs_discovered": 4, 00:14:50.301 "num_base_bdevs_operational": 4, 00:14:50.301 "process": { 00:14:50.301 "type": "rebuild", 00:14:50.301 "target": "spare", 00:14:50.301 "progress": { 00:14:50.301 "blocks": 42240, 00:14:50.301 "percent": 21 00:14:50.301 } 00:14:50.301 }, 00:14:50.301 "base_bdevs_list": [ 00:14:50.301 { 
00:14:50.301 "name": "spare", 00:14:50.301 "uuid": "0184a29d-ab82-5fb9-9bea-7ba2a13091f3", 00:14:50.301 "is_configured": true, 00:14:50.301 "data_offset": 0, 00:14:50.301 "data_size": 65536 00:14:50.301 }, 00:14:50.301 { 00:14:50.301 "name": "BaseBdev2", 00:14:50.301 "uuid": "496c8b25-edc5-58f0-9139-709a5482a6e0", 00:14:50.301 "is_configured": true, 00:14:50.301 "data_offset": 0, 00:14:50.301 "data_size": 65536 00:14:50.301 }, 00:14:50.301 { 00:14:50.301 "name": "BaseBdev3", 00:14:50.301 "uuid": "8894e266-7c38-5536-b68b-01e324c52ff3", 00:14:50.301 "is_configured": true, 00:14:50.301 "data_offset": 0, 00:14:50.301 "data_size": 65536 00:14:50.301 }, 00:14:50.301 { 00:14:50.301 "name": "BaseBdev4", 00:14:50.301 "uuid": "95b40acb-8004-5368-93b2-c39169de3ebb", 00:14:50.301 "is_configured": true, 00:14:50.301 "data_offset": 0, 00:14:50.301 "data_size": 65536 00:14:50.301 } 00:14:50.301 ] 00:14:50.301 }' 00:14:50.301 12:58:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.560 12:58:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:50.560 12:58:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.560 12:58:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:50.560 12:58:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:51.498 12:58:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:51.498 12:58:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:51.498 12:58:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.498 12:58:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:51.498 12:58:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:14:51.498 12:58:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.498 12:58:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.498 12:58:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.498 12:58:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.498 12:58:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.498 12:58:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.498 12:58:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.498 "name": "raid_bdev1", 00:14:51.498 "uuid": "aa5e25bd-e55a-45b7-bc98-cde0bed6f97d", 00:14:51.498 "strip_size_kb": 64, 00:14:51.498 "state": "online", 00:14:51.498 "raid_level": "raid5f", 00:14:51.498 "superblock": false, 00:14:51.498 "num_base_bdevs": 4, 00:14:51.498 "num_base_bdevs_discovered": 4, 00:14:51.498 "num_base_bdevs_operational": 4, 00:14:51.498 "process": { 00:14:51.498 "type": "rebuild", 00:14:51.498 "target": "spare", 00:14:51.498 "progress": { 00:14:51.498 "blocks": 65280, 00:14:51.498 "percent": 33 00:14:51.498 } 00:14:51.498 }, 00:14:51.498 "base_bdevs_list": [ 00:14:51.498 { 00:14:51.498 "name": "spare", 00:14:51.498 "uuid": "0184a29d-ab82-5fb9-9bea-7ba2a13091f3", 00:14:51.498 "is_configured": true, 00:14:51.498 "data_offset": 0, 00:14:51.498 "data_size": 65536 00:14:51.498 }, 00:14:51.498 { 00:14:51.498 "name": "BaseBdev2", 00:14:51.498 "uuid": "496c8b25-edc5-58f0-9139-709a5482a6e0", 00:14:51.498 "is_configured": true, 00:14:51.498 "data_offset": 0, 00:14:51.498 "data_size": 65536 00:14:51.499 }, 00:14:51.499 { 00:14:51.499 "name": "BaseBdev3", 00:14:51.499 "uuid": "8894e266-7c38-5536-b68b-01e324c52ff3", 00:14:51.499 "is_configured": true, 00:14:51.499 "data_offset": 0, 00:14:51.499 
"data_size": 65536 00:14:51.499 }, 00:14:51.499 { 00:14:51.499 "name": "BaseBdev4", 00:14:51.499 "uuid": "95b40acb-8004-5368-93b2-c39169de3ebb", 00:14:51.499 "is_configured": true, 00:14:51.499 "data_offset": 0, 00:14:51.499 "data_size": 65536 00:14:51.499 } 00:14:51.499 ] 00:14:51.499 }' 00:14:51.499 12:58:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.499 12:58:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.499 12:58:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.758 12:58:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:51.758 12:58:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:52.698 12:58:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:52.698 12:58:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:52.698 12:58:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.698 12:58:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:52.698 12:58:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:52.698 12:58:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.698 12:58:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.698 12:58:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.698 12:58:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.698 12:58:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.698 12:58:10 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.698 12:58:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.698 "name": "raid_bdev1", 00:14:52.698 "uuid": "aa5e25bd-e55a-45b7-bc98-cde0bed6f97d", 00:14:52.698 "strip_size_kb": 64, 00:14:52.698 "state": "online", 00:14:52.698 "raid_level": "raid5f", 00:14:52.698 "superblock": false, 00:14:52.698 "num_base_bdevs": 4, 00:14:52.698 "num_base_bdevs_discovered": 4, 00:14:52.698 "num_base_bdevs_operational": 4, 00:14:52.698 "process": { 00:14:52.698 "type": "rebuild", 00:14:52.698 "target": "spare", 00:14:52.698 "progress": { 00:14:52.698 "blocks": 86400, 00:14:52.698 "percent": 43 00:14:52.698 } 00:14:52.698 }, 00:14:52.698 "base_bdevs_list": [ 00:14:52.698 { 00:14:52.698 "name": "spare", 00:14:52.698 "uuid": "0184a29d-ab82-5fb9-9bea-7ba2a13091f3", 00:14:52.698 "is_configured": true, 00:14:52.698 "data_offset": 0, 00:14:52.698 "data_size": 65536 00:14:52.698 }, 00:14:52.698 { 00:14:52.698 "name": "BaseBdev2", 00:14:52.698 "uuid": "496c8b25-edc5-58f0-9139-709a5482a6e0", 00:14:52.698 "is_configured": true, 00:14:52.698 "data_offset": 0, 00:14:52.698 "data_size": 65536 00:14:52.698 }, 00:14:52.698 { 00:14:52.698 "name": "BaseBdev3", 00:14:52.698 "uuid": "8894e266-7c38-5536-b68b-01e324c52ff3", 00:14:52.698 "is_configured": true, 00:14:52.698 "data_offset": 0, 00:14:52.698 "data_size": 65536 00:14:52.698 }, 00:14:52.698 { 00:14:52.698 "name": "BaseBdev4", 00:14:52.698 "uuid": "95b40acb-8004-5368-93b2-c39169de3ebb", 00:14:52.698 "is_configured": true, 00:14:52.698 "data_offset": 0, 00:14:52.698 "data_size": 65536 00:14:52.698 } 00:14:52.698 ] 00:14:52.698 }' 00:14:52.698 12:58:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.698 12:58:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:52.698 12:58:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:14:52.698 12:58:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:52.698 12:58:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:54.080 12:58:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:54.080 12:58:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:54.080 12:58:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.080 12:58:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:54.080 12:58:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:54.080 12:58:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.080 12:58:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.080 12:58:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.080 12:58:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.080 12:58:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.080 12:58:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.080 12:58:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.080 "name": "raid_bdev1", 00:14:54.080 "uuid": "aa5e25bd-e55a-45b7-bc98-cde0bed6f97d", 00:14:54.080 "strip_size_kb": 64, 00:14:54.080 "state": "online", 00:14:54.080 "raid_level": "raid5f", 00:14:54.080 "superblock": false, 00:14:54.080 "num_base_bdevs": 4, 00:14:54.080 "num_base_bdevs_discovered": 4, 00:14:54.080 "num_base_bdevs_operational": 4, 00:14:54.080 "process": { 00:14:54.080 "type": "rebuild", 00:14:54.080 "target": "spare", 00:14:54.080 
"progress": { 00:14:54.080 "blocks": 109440, 00:14:54.080 "percent": 55 00:14:54.080 } 00:14:54.080 }, 00:14:54.080 "base_bdevs_list": [ 00:14:54.080 { 00:14:54.080 "name": "spare", 00:14:54.080 "uuid": "0184a29d-ab82-5fb9-9bea-7ba2a13091f3", 00:14:54.080 "is_configured": true, 00:14:54.080 "data_offset": 0, 00:14:54.080 "data_size": 65536 00:14:54.080 }, 00:14:54.080 { 00:14:54.080 "name": "BaseBdev2", 00:14:54.080 "uuid": "496c8b25-edc5-58f0-9139-709a5482a6e0", 00:14:54.080 "is_configured": true, 00:14:54.080 "data_offset": 0, 00:14:54.080 "data_size": 65536 00:14:54.080 }, 00:14:54.080 { 00:14:54.080 "name": "BaseBdev3", 00:14:54.080 "uuid": "8894e266-7c38-5536-b68b-01e324c52ff3", 00:14:54.080 "is_configured": true, 00:14:54.080 "data_offset": 0, 00:14:54.080 "data_size": 65536 00:14:54.080 }, 00:14:54.080 { 00:14:54.080 "name": "BaseBdev4", 00:14:54.080 "uuid": "95b40acb-8004-5368-93b2-c39169de3ebb", 00:14:54.080 "is_configured": true, 00:14:54.080 "data_offset": 0, 00:14:54.081 "data_size": 65536 00:14:54.081 } 00:14:54.081 ] 00:14:54.081 }' 00:14:54.081 12:58:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.081 12:58:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:54.081 12:58:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.081 12:58:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:54.081 12:58:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:55.020 12:58:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:55.020 12:58:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:55.020 12:58:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.020 12:58:12 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:55.020 12:58:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:55.020 12:58:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.020 12:58:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.020 12:58:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.020 12:58:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.020 12:58:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.020 12:58:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.020 12:58:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.020 "name": "raid_bdev1", 00:14:55.020 "uuid": "aa5e25bd-e55a-45b7-bc98-cde0bed6f97d", 00:14:55.020 "strip_size_kb": 64, 00:14:55.020 "state": "online", 00:14:55.020 "raid_level": "raid5f", 00:14:55.020 "superblock": false, 00:14:55.020 "num_base_bdevs": 4, 00:14:55.020 "num_base_bdevs_discovered": 4, 00:14:55.020 "num_base_bdevs_operational": 4, 00:14:55.020 "process": { 00:14:55.020 "type": "rebuild", 00:14:55.020 "target": "spare", 00:14:55.020 "progress": { 00:14:55.020 "blocks": 130560, 00:14:55.020 "percent": 66 00:14:55.020 } 00:14:55.020 }, 00:14:55.020 "base_bdevs_list": [ 00:14:55.020 { 00:14:55.020 "name": "spare", 00:14:55.020 "uuid": "0184a29d-ab82-5fb9-9bea-7ba2a13091f3", 00:14:55.020 "is_configured": true, 00:14:55.020 "data_offset": 0, 00:14:55.020 "data_size": 65536 00:14:55.020 }, 00:14:55.020 { 00:14:55.020 "name": "BaseBdev2", 00:14:55.020 "uuid": "496c8b25-edc5-58f0-9139-709a5482a6e0", 00:14:55.020 "is_configured": true, 00:14:55.020 "data_offset": 0, 00:14:55.020 "data_size": 65536 00:14:55.020 }, 00:14:55.020 { 
00:14:55.020 "name": "BaseBdev3", 00:14:55.020 "uuid": "8894e266-7c38-5536-b68b-01e324c52ff3", 00:14:55.020 "is_configured": true, 00:14:55.020 "data_offset": 0, 00:14:55.020 "data_size": 65536 00:14:55.020 }, 00:14:55.020 { 00:14:55.020 "name": "BaseBdev4", 00:14:55.020 "uuid": "95b40acb-8004-5368-93b2-c39169de3ebb", 00:14:55.020 "is_configured": true, 00:14:55.020 "data_offset": 0, 00:14:55.020 "data_size": 65536 00:14:55.020 } 00:14:55.020 ] 00:14:55.020 }' 00:14:55.020 12:58:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.020 12:58:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:55.020 12:58:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.020 12:58:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:55.020 12:58:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:56.402 12:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:56.402 12:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.402 12:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.402 12:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.402 12:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.402 12:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.402 12:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.402 12:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.402 12:58:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:14:56.402 12:58:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.402 12:58:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.402 12:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.402 "name": "raid_bdev1", 00:14:56.402 "uuid": "aa5e25bd-e55a-45b7-bc98-cde0bed6f97d", 00:14:56.402 "strip_size_kb": 64, 00:14:56.402 "state": "online", 00:14:56.402 "raid_level": "raid5f", 00:14:56.402 "superblock": false, 00:14:56.402 "num_base_bdevs": 4, 00:14:56.402 "num_base_bdevs_discovered": 4, 00:14:56.402 "num_base_bdevs_operational": 4, 00:14:56.402 "process": { 00:14:56.402 "type": "rebuild", 00:14:56.402 "target": "spare", 00:14:56.402 "progress": { 00:14:56.402 "blocks": 153600, 00:14:56.402 "percent": 78 00:14:56.402 } 00:14:56.402 }, 00:14:56.402 "base_bdevs_list": [ 00:14:56.402 { 00:14:56.402 "name": "spare", 00:14:56.402 "uuid": "0184a29d-ab82-5fb9-9bea-7ba2a13091f3", 00:14:56.402 "is_configured": true, 00:14:56.402 "data_offset": 0, 00:14:56.402 "data_size": 65536 00:14:56.402 }, 00:14:56.402 { 00:14:56.402 "name": "BaseBdev2", 00:14:56.402 "uuid": "496c8b25-edc5-58f0-9139-709a5482a6e0", 00:14:56.402 "is_configured": true, 00:14:56.402 "data_offset": 0, 00:14:56.402 "data_size": 65536 00:14:56.402 }, 00:14:56.402 { 00:14:56.402 "name": "BaseBdev3", 00:14:56.402 "uuid": "8894e266-7c38-5536-b68b-01e324c52ff3", 00:14:56.403 "is_configured": true, 00:14:56.403 "data_offset": 0, 00:14:56.403 "data_size": 65536 00:14:56.403 }, 00:14:56.403 { 00:14:56.403 "name": "BaseBdev4", 00:14:56.403 "uuid": "95b40acb-8004-5368-93b2-c39169de3ebb", 00:14:56.403 "is_configured": true, 00:14:56.403 "data_offset": 0, 00:14:56.403 "data_size": 65536 00:14:56.403 } 00:14:56.403 ] 00:14:56.403 }' 00:14:56.403 12:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.403 12:58:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:56.403 12:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.403 12:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.403 12:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:57.344 12:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:57.344 12:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:57.344 12:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.344 12:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:57.344 12:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:57.344 12:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:57.344 12:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.344 12:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.344 12:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.344 12:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.344 12:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.344 12:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.344 "name": "raid_bdev1", 00:14:57.344 "uuid": "aa5e25bd-e55a-45b7-bc98-cde0bed6f97d", 00:14:57.344 "strip_size_kb": 64, 00:14:57.344 "state": "online", 00:14:57.344 "raid_level": "raid5f", 00:14:57.344 "superblock": false, 00:14:57.344 "num_base_bdevs": 4, 00:14:57.344 
"num_base_bdevs_discovered": 4, 00:14:57.344 "num_base_bdevs_operational": 4, 00:14:57.344 "process": { 00:14:57.344 "type": "rebuild", 00:14:57.344 "target": "spare", 00:14:57.344 "progress": { 00:14:57.344 "blocks": 174720, 00:14:57.344 "percent": 88 00:14:57.344 } 00:14:57.344 }, 00:14:57.344 "base_bdevs_list": [ 00:14:57.344 { 00:14:57.344 "name": "spare", 00:14:57.344 "uuid": "0184a29d-ab82-5fb9-9bea-7ba2a13091f3", 00:14:57.344 "is_configured": true, 00:14:57.344 "data_offset": 0, 00:14:57.344 "data_size": 65536 00:14:57.344 }, 00:14:57.344 { 00:14:57.344 "name": "BaseBdev2", 00:14:57.344 "uuid": "496c8b25-edc5-58f0-9139-709a5482a6e0", 00:14:57.344 "is_configured": true, 00:14:57.344 "data_offset": 0, 00:14:57.344 "data_size": 65536 00:14:57.344 }, 00:14:57.344 { 00:14:57.344 "name": "BaseBdev3", 00:14:57.344 "uuid": "8894e266-7c38-5536-b68b-01e324c52ff3", 00:14:57.344 "is_configured": true, 00:14:57.344 "data_offset": 0, 00:14:57.344 "data_size": 65536 00:14:57.344 }, 00:14:57.344 { 00:14:57.344 "name": "BaseBdev4", 00:14:57.344 "uuid": "95b40acb-8004-5368-93b2-c39169de3ebb", 00:14:57.344 "is_configured": true, 00:14:57.344 "data_offset": 0, 00:14:57.344 "data_size": 65536 00:14:57.344 } 00:14:57.344 ] 00:14:57.344 }' 00:14:57.344 12:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.344 12:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:57.344 12:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.344 12:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:57.344 12:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:58.726 12:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:58.726 12:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:14:58.726 12:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.726 12:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:58.726 12:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:58.726 12:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.726 [2024-11-26 12:58:15.967688] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:58.726 [2024-11-26 12:58:15.967785] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:58.726 [2024-11-26 12:58:15.967827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.726 12:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.726 12:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.726 12:58:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.726 12:58:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.726 12:58:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.726 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.726 "name": "raid_bdev1", 00:14:58.726 "uuid": "aa5e25bd-e55a-45b7-bc98-cde0bed6f97d", 00:14:58.726 "strip_size_kb": 64, 00:14:58.726 "state": "online", 00:14:58.726 "raid_level": "raid5f", 00:14:58.726 "superblock": false, 00:14:58.726 "num_base_bdevs": 4, 00:14:58.726 "num_base_bdevs_discovered": 4, 00:14:58.726 "num_base_bdevs_operational": 4, 00:14:58.726 "base_bdevs_list": [ 00:14:58.726 { 00:14:58.726 "name": "spare", 00:14:58.726 "uuid": "0184a29d-ab82-5fb9-9bea-7ba2a13091f3", 00:14:58.726 
"is_configured": true, 00:14:58.726 "data_offset": 0, 00:14:58.726 "data_size": 65536 00:14:58.726 }, 00:14:58.726 { 00:14:58.726 "name": "BaseBdev2", 00:14:58.726 "uuid": "496c8b25-edc5-58f0-9139-709a5482a6e0", 00:14:58.726 "is_configured": true, 00:14:58.726 "data_offset": 0, 00:14:58.726 "data_size": 65536 00:14:58.726 }, 00:14:58.726 { 00:14:58.726 "name": "BaseBdev3", 00:14:58.726 "uuid": "8894e266-7c38-5536-b68b-01e324c52ff3", 00:14:58.726 "is_configured": true, 00:14:58.726 "data_offset": 0, 00:14:58.726 "data_size": 65536 00:14:58.726 }, 00:14:58.726 { 00:14:58.726 "name": "BaseBdev4", 00:14:58.726 "uuid": "95b40acb-8004-5368-93b2-c39169de3ebb", 00:14:58.726 "is_configured": true, 00:14:58.726 "data_offset": 0, 00:14:58.726 "data_size": 65536 00:14:58.726 } 00:14:58.726 ] 00:14:58.726 }' 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.727 "name": "raid_bdev1", 00:14:58.727 "uuid": "aa5e25bd-e55a-45b7-bc98-cde0bed6f97d", 00:14:58.727 "strip_size_kb": 64, 00:14:58.727 "state": "online", 00:14:58.727 "raid_level": "raid5f", 00:14:58.727 "superblock": false, 00:14:58.727 "num_base_bdevs": 4, 00:14:58.727 "num_base_bdevs_discovered": 4, 00:14:58.727 "num_base_bdevs_operational": 4, 00:14:58.727 "base_bdevs_list": [ 00:14:58.727 { 00:14:58.727 "name": "spare", 00:14:58.727 "uuid": "0184a29d-ab82-5fb9-9bea-7ba2a13091f3", 00:14:58.727 "is_configured": true, 00:14:58.727 "data_offset": 0, 00:14:58.727 "data_size": 65536 00:14:58.727 }, 00:14:58.727 { 00:14:58.727 "name": "BaseBdev2", 00:14:58.727 "uuid": "496c8b25-edc5-58f0-9139-709a5482a6e0", 00:14:58.727 "is_configured": true, 00:14:58.727 "data_offset": 0, 00:14:58.727 "data_size": 65536 00:14:58.727 }, 00:14:58.727 { 00:14:58.727 "name": "BaseBdev3", 00:14:58.727 "uuid": "8894e266-7c38-5536-b68b-01e324c52ff3", 00:14:58.727 "is_configured": true, 00:14:58.727 "data_offset": 0, 00:14:58.727 "data_size": 65536 00:14:58.727 }, 00:14:58.727 { 00:14:58.727 "name": "BaseBdev4", 00:14:58.727 "uuid": "95b40acb-8004-5368-93b2-c39169de3ebb", 00:14:58.727 "is_configured": true, 00:14:58.727 "data_offset": 0, 00:14:58.727 "data_size": 65536 00:14:58.727 } 00:14:58.727 ] 00:14:58.727 }' 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.727 
12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.727 12:58:16 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.727 "name": "raid_bdev1", 00:14:58.727 "uuid": "aa5e25bd-e55a-45b7-bc98-cde0bed6f97d", 00:14:58.727 "strip_size_kb": 64, 00:14:58.727 "state": "online", 00:14:58.727 "raid_level": "raid5f", 00:14:58.727 "superblock": false, 00:14:58.727 "num_base_bdevs": 4, 00:14:58.727 "num_base_bdevs_discovered": 4, 00:14:58.727 "num_base_bdevs_operational": 4, 00:14:58.727 "base_bdevs_list": [ 00:14:58.727 { 00:14:58.727 "name": "spare", 00:14:58.727 "uuid": "0184a29d-ab82-5fb9-9bea-7ba2a13091f3", 00:14:58.727 "is_configured": true, 00:14:58.727 "data_offset": 0, 00:14:58.727 "data_size": 65536 00:14:58.727 }, 00:14:58.727 { 00:14:58.727 "name": "BaseBdev2", 00:14:58.727 "uuid": "496c8b25-edc5-58f0-9139-709a5482a6e0", 00:14:58.727 "is_configured": true, 00:14:58.727 "data_offset": 0, 00:14:58.727 "data_size": 65536 00:14:58.727 }, 00:14:58.727 { 00:14:58.727 "name": "BaseBdev3", 00:14:58.727 "uuid": "8894e266-7c38-5536-b68b-01e324c52ff3", 00:14:58.727 "is_configured": true, 00:14:58.727 "data_offset": 0, 00:14:58.727 "data_size": 65536 00:14:58.727 }, 00:14:58.727 { 00:14:58.727 "name": "BaseBdev4", 00:14:58.727 "uuid": "95b40acb-8004-5368-93b2-c39169de3ebb", 00:14:58.727 "is_configured": true, 00:14:58.727 "data_offset": 0, 00:14:58.727 "data_size": 65536 00:14:58.727 } 00:14:58.727 ] 00:14:58.727 }' 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.727 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.296 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:59.296 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.296 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.296 [2024-11-26 12:58:16.699769] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: 
delete raid bdev: raid_bdev1 00:14:59.296 [2024-11-26 12:58:16.699797] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:59.296 [2024-11-26 12:58:16.699882] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:59.296 [2024-11-26 12:58:16.699967] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:59.296 [2024-11-26 12:58:16.699981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:59.296 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.297 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.297 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:59.297 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.297 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.297 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.297 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:59.297 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:59.297 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:59.297 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:59.297 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:59.297 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:59.297 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:59.297 
12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:59.297 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:59.297 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:59.297 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:59.297 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:59.297 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:59.297 /dev/nbd0 00:14:59.556 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:59.556 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:59.556 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:59.556 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:59.556 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:59.556 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:59.556 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:59.557 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:59.557 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:59.557 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:59.557 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:59.557 1+0 records in 00:14:59.557 1+0 records out 00:14:59.557 4096 bytes (4.1 kB, 
4.0 KiB) copied, 0.000496826 s, 8.2 MB/s 00:14:59.557 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.557 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:59.557 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.557 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:59.557 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:59.557 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:59.557 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:59.557 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:59.557 /dev/nbd1 00:14:59.816 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:59.816 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:59.816 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:59.816 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:59.816 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:59.816 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:59.816 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:59.816 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:59.816 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:59.816 12:58:17 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:59.816 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:59.816 1+0 records in 00:14:59.816 1+0 records out 00:14:59.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358514 s, 11.4 MB/s 00:14:59.816 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.816 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:59.816 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.816 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:59.816 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:59.816 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:59.816 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:59.816 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:59.816 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:59.816 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:59.816 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:59.816 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:59.817 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:59.817 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:59.817 12:58:17 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:00.076 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:00.076 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:00.076 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:00.076 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:00.076 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:00.076 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:00.076 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:00.076 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:00.076 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:00.076 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:00.336 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:00.336 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:00.336 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:00.336 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:00.336 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:00.336 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:00.336 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:00.336 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- 
# return 0 00:15:00.336 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:00.336 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 95252 00:15:00.336 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 95252 ']' 00:15:00.336 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 95252 00:15:00.336 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:15:00.336 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:00.336 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95252 00:15:00.336 killing process with pid 95252 00:15:00.336 Received shutdown signal, test time was about 60.000000 seconds 00:15:00.336 00:15:00.336 Latency(us) 00:15:00.336 [2024-11-26T12:58:18.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.336 [2024-11-26T12:58:18.020Z] =================================================================================================================== 00:15:00.336 [2024-11-26T12:58:18.020Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:00.336 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:00.336 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:00.336 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95252' 00:15:00.336 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 95252 00:15:00.336 [2024-11-26 12:58:17.832922] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:00.336 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 95252 00:15:00.336 [2024-11-26 12:58:17.884047] 
bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:00.597 00:15:00.597 real 0m17.496s 00:15:00.597 user 0m21.228s 00:15:00.597 sys 0m2.488s 00:15:00.597 ************************************ 00:15:00.597 END TEST raid5f_rebuild_test 00:15:00.597 ************************************ 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.597 12:58:18 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:15:00.597 12:58:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:00.597 12:58:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:00.597 12:58:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:00.597 ************************************ 00:15:00.597 START TEST raid5f_rebuild_test_sb 00:15:00.597 ************************************ 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs 
)) 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local 
data_offset 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=95738 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 95738 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 95738 ']' 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:00.597 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.858 [2024-11-26 12:58:18.310966] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:00.858 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:00.858 Zero copy mechanism will not be used. 00:15:00.858 [2024-11-26 12:58:18.311202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95738 ] 00:15:00.858 [2024-11-26 12:58:18.477517] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.858 [2024-11-26 12:58:18.523994] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.118 [2024-11-26 12:58:18.566842] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.118 [2024-11-26 12:58:18.566957] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.687 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:01.687 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:01.687 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:01.687 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:01.687 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.687 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.687 BaseBdev1_malloc 00:15:01.687 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:15:01.687 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:01.687 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.687 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.687 [2024-11-26 12:58:19.152769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:01.687 [2024-11-26 12:58:19.152843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.687 [2024-11-26 12:58:19.152869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:01.687 [2024-11-26 12:58:19.152884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.687 [2024-11-26 12:58:19.154902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.687 [2024-11-26 12:58:19.154946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:01.687 BaseBdev1 00:15:01.687 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.687 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:01.687 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:01.687 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.687 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.687 BaseBdev2_malloc 00:15:01.687 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:01.688 
12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.688 [2024-11-26 12:58:19.196996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:01.688 [2024-11-26 12:58:19.197106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.688 [2024-11-26 12:58:19.197152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:01.688 [2024-11-26 12:58:19.197206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.688 [2024-11-26 12:58:19.201939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.688 [2024-11-26 12:58:19.202013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:01.688 BaseBdev2 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.688 BaseBdev3_malloc 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:01.688 [2024-11-26 12:58:19.228287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:01.688 [2024-11-26 12:58:19.228340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.688 [2024-11-26 12:58:19.228363] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:01.688 [2024-11-26 12:58:19.228372] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.688 [2024-11-26 12:58:19.230359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.688 [2024-11-26 12:58:19.230395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:01.688 BaseBdev3 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.688 BaseBdev4_malloc 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.688 [2024-11-26 12:58:19.256896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:01.688 
[2024-11-26 12:58:19.257017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.688 [2024-11-26 12:58:19.257062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:01.688 [2024-11-26 12:58:19.257070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.688 [2024-11-26 12:58:19.259056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.688 [2024-11-26 12:58:19.259093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:01.688 BaseBdev4 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.688 spare_malloc 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.688 spare_delay 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.688 12:58:19 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.688 [2024-11-26 12:58:19.297409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:01.688 [2024-11-26 12:58:19.297462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.688 [2024-11-26 12:58:19.297483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:01.688 [2024-11-26 12:58:19.297492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.688 [2024-11-26 12:58:19.299532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.688 [2024-11-26 12:58:19.299621] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:01.688 spare 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.688 [2024-11-26 12:58:19.309479] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:01.688 [2024-11-26 12:58:19.311278] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:01.688 [2024-11-26 12:58:19.311340] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:01.688 [2024-11-26 12:58:19.311378] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:01.688 [2024-11-26 12:58:19.311538] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:01.688 [2024-11-26 
12:58:19.311550] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:01.688 [2024-11-26 12:58:19.311818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:01.688 [2024-11-26 12:58:19.312275] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:01.688 [2024-11-26 12:58:19.312294] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:01.688 [2024-11-26 12:58:19.312427] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.688 12:58:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.688 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.947 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.947 "name": "raid_bdev1", 00:15:01.947 "uuid": "497891f3-b46f-4f8f-8565-7455554e4e83", 00:15:01.947 "strip_size_kb": 64, 00:15:01.947 "state": "online", 00:15:01.947 "raid_level": "raid5f", 00:15:01.947 "superblock": true, 00:15:01.947 "num_base_bdevs": 4, 00:15:01.947 "num_base_bdevs_discovered": 4, 00:15:01.947 "num_base_bdevs_operational": 4, 00:15:01.947 "base_bdevs_list": [ 00:15:01.947 { 00:15:01.947 "name": "BaseBdev1", 00:15:01.947 "uuid": "02ab9f36-41aa-536e-8b7e-158e02c0c14b", 00:15:01.947 "is_configured": true, 00:15:01.947 "data_offset": 2048, 00:15:01.947 "data_size": 63488 00:15:01.947 }, 00:15:01.947 { 00:15:01.947 "name": "BaseBdev2", 00:15:01.947 "uuid": "b60ed88f-3189-5efa-9808-454aa857669c", 00:15:01.947 "is_configured": true, 00:15:01.947 "data_offset": 2048, 00:15:01.947 "data_size": 63488 00:15:01.947 }, 00:15:01.947 { 00:15:01.947 "name": "BaseBdev3", 00:15:01.947 "uuid": "3a123e4f-9b83-5412-b742-fe8f01990ba5", 00:15:01.947 "is_configured": true, 00:15:01.947 "data_offset": 2048, 00:15:01.947 "data_size": 63488 00:15:01.947 }, 00:15:01.947 { 00:15:01.947 "name": "BaseBdev4", 00:15:01.947 "uuid": "74f485ec-f8f0-5267-8945-6420c7ab8355", 00:15:01.947 "is_configured": true, 00:15:01.947 "data_offset": 2048, 00:15:01.947 "data_size": 63488 00:15:01.947 } 00:15:01.947 ] 00:15:01.947 }' 00:15:01.947 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.947 12:58:19 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.207 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:02.207 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:02.207 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.207 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.207 [2024-11-26 12:58:19.785465] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.207 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.207 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:15:02.207 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.207 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.207 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.207 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:02.207 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.207 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:02.207 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:02.207 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:02.207 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:02.207 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:02.207 12:58:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:02.207 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:02.207 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:02.207 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:02.207 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:02.207 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:02.207 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:02.207 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:02.207 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:02.467 [2024-11-26 12:58:20.041010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:02.467 /dev/nbd0 00:15:02.467 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:02.467 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:02.467 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:02.467 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:02.467 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:02.467 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:02.467 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:02.467 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 
00:15:02.467 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:02.467 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:02.467 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:02.467 1+0 records in 00:15:02.467 1+0 records out 00:15:02.467 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326255 s, 12.6 MB/s 00:15:02.467 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.467 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:02.467 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.467 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:02.467 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:02.467 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:02.467 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:02.467 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:02.467 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:02.467 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:02.467 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:15:03.036 496+0 records in 00:15:03.036 496+0 records out 00:15:03.036 97517568 bytes (98 MB, 93 MiB) copied, 0.382085 s, 255 MB/s 00:15:03.036 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:03.036 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:03.036 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:03.036 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:03.036 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:03.036 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:03.036 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:03.295 [2024-11-26 12:58:20.728245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.295 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:03.295 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:03.295 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:03.295 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:03.295 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:03.295 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:03.295 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:03.296 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:03.296 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:03.296 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.296 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:03.296 [2024-11-26 12:58:20.755858] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:03.296 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.296 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:03.296 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.296 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.296 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.296 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.296 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.296 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.296 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.296 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.296 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.296 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.296 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.296 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.296 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.296 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.296 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.296 "name": "raid_bdev1", 00:15:03.296 "uuid": "497891f3-b46f-4f8f-8565-7455554e4e83", 00:15:03.296 "strip_size_kb": 64, 00:15:03.296 "state": "online", 00:15:03.296 "raid_level": "raid5f", 00:15:03.296 "superblock": true, 00:15:03.296 "num_base_bdevs": 4, 00:15:03.296 "num_base_bdevs_discovered": 3, 00:15:03.296 "num_base_bdevs_operational": 3, 00:15:03.296 "base_bdevs_list": [ 00:15:03.296 { 00:15:03.296 "name": null, 00:15:03.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.296 "is_configured": false, 00:15:03.296 "data_offset": 0, 00:15:03.296 "data_size": 63488 00:15:03.296 }, 00:15:03.296 { 00:15:03.296 "name": "BaseBdev2", 00:15:03.296 "uuid": "b60ed88f-3189-5efa-9808-454aa857669c", 00:15:03.296 "is_configured": true, 00:15:03.296 "data_offset": 2048, 00:15:03.296 "data_size": 63488 00:15:03.296 }, 00:15:03.296 { 00:15:03.296 "name": "BaseBdev3", 00:15:03.296 "uuid": "3a123e4f-9b83-5412-b742-fe8f01990ba5", 00:15:03.296 "is_configured": true, 00:15:03.296 "data_offset": 2048, 00:15:03.296 "data_size": 63488 00:15:03.296 }, 00:15:03.296 { 00:15:03.296 "name": "BaseBdev4", 00:15:03.296 "uuid": "74f485ec-f8f0-5267-8945-6420c7ab8355", 00:15:03.296 "is_configured": true, 00:15:03.296 "data_offset": 2048, 00:15:03.296 "data_size": 63488 00:15:03.296 } 00:15:03.296 ] 00:15:03.296 }' 00:15:03.296 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.296 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.555 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:03.555 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.555 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.555 [2024-11-26 12:58:21.191463] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:15:03.555 [2024-11-26 12:58:21.194892] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a8b0 00:15:03.555 [2024-11-26 12:58:21.197138] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:03.555 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.555 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.937 "name": "raid_bdev1", 00:15:04.937 "uuid": "497891f3-b46f-4f8f-8565-7455554e4e83", 00:15:04.937 "strip_size_kb": 64, 00:15:04.937 "state": "online", 00:15:04.937 "raid_level": "raid5f", 00:15:04.937 "superblock": true, 00:15:04.937 "num_base_bdevs": 4, 
00:15:04.937 "num_base_bdevs_discovered": 4, 00:15:04.937 "num_base_bdevs_operational": 4, 00:15:04.937 "process": { 00:15:04.937 "type": "rebuild", 00:15:04.937 "target": "spare", 00:15:04.937 "progress": { 00:15:04.937 "blocks": 19200, 00:15:04.937 "percent": 10 00:15:04.937 } 00:15:04.937 }, 00:15:04.937 "base_bdevs_list": [ 00:15:04.937 { 00:15:04.937 "name": "spare", 00:15:04.937 "uuid": "a8df069e-206f-5e6b-b29b-ef23cba81f7a", 00:15:04.937 "is_configured": true, 00:15:04.937 "data_offset": 2048, 00:15:04.937 "data_size": 63488 00:15:04.937 }, 00:15:04.937 { 00:15:04.937 "name": "BaseBdev2", 00:15:04.937 "uuid": "b60ed88f-3189-5efa-9808-454aa857669c", 00:15:04.937 "is_configured": true, 00:15:04.937 "data_offset": 2048, 00:15:04.937 "data_size": 63488 00:15:04.937 }, 00:15:04.937 { 00:15:04.937 "name": "BaseBdev3", 00:15:04.937 "uuid": "3a123e4f-9b83-5412-b742-fe8f01990ba5", 00:15:04.937 "is_configured": true, 00:15:04.937 "data_offset": 2048, 00:15:04.937 "data_size": 63488 00:15:04.937 }, 00:15:04.937 { 00:15:04.937 "name": "BaseBdev4", 00:15:04.937 "uuid": "74f485ec-f8f0-5267-8945-6420c7ab8355", 00:15:04.937 "is_configured": true, 00:15:04.937 "data_offset": 2048, 00:15:04.937 "data_size": 63488 00:15:04.937 } 00:15:04.937 ] 00:15:04.937 }' 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.937 12:58:22 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.937 [2024-11-26 12:58:22.359783] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:04.937 [2024-11-26 12:58:22.402449] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:04.937 [2024-11-26 12:58:22.402517] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.937 [2024-11-26 12:58:22.402535] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:04.937 [2024-11-26 12:58:22.402544] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.937 "name": "raid_bdev1", 00:15:04.937 "uuid": "497891f3-b46f-4f8f-8565-7455554e4e83", 00:15:04.937 "strip_size_kb": 64, 00:15:04.937 "state": "online", 00:15:04.937 "raid_level": "raid5f", 00:15:04.937 "superblock": true, 00:15:04.937 "num_base_bdevs": 4, 00:15:04.937 "num_base_bdevs_discovered": 3, 00:15:04.937 "num_base_bdevs_operational": 3, 00:15:04.937 "base_bdevs_list": [ 00:15:04.937 { 00:15:04.937 "name": null, 00:15:04.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.937 "is_configured": false, 00:15:04.937 "data_offset": 0, 00:15:04.937 "data_size": 63488 00:15:04.937 }, 00:15:04.937 { 00:15:04.937 "name": "BaseBdev2", 00:15:04.937 "uuid": "b60ed88f-3189-5efa-9808-454aa857669c", 00:15:04.937 "is_configured": true, 00:15:04.937 "data_offset": 2048, 00:15:04.937 "data_size": 63488 00:15:04.937 }, 00:15:04.937 { 00:15:04.937 "name": "BaseBdev3", 00:15:04.937 "uuid": "3a123e4f-9b83-5412-b742-fe8f01990ba5", 00:15:04.937 "is_configured": true, 00:15:04.937 "data_offset": 2048, 00:15:04.937 "data_size": 63488 00:15:04.937 }, 00:15:04.937 { 00:15:04.937 "name": "BaseBdev4", 00:15:04.937 "uuid": "74f485ec-f8f0-5267-8945-6420c7ab8355", 00:15:04.937 "is_configured": true, 00:15:04.937 "data_offset": 2048, 00:15:04.937 "data_size": 63488 00:15:04.937 } 00:15:04.937 ] 00:15:04.937 }' 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.937 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.197 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:05.197 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.197 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:05.197 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:05.197 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.197 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.197 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.197 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.197 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.197 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.197 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.197 "name": "raid_bdev1", 00:15:05.198 "uuid": "497891f3-b46f-4f8f-8565-7455554e4e83", 00:15:05.198 "strip_size_kb": 64, 00:15:05.198 "state": "online", 00:15:05.198 "raid_level": "raid5f", 00:15:05.198 "superblock": true, 00:15:05.198 "num_base_bdevs": 4, 00:15:05.198 "num_base_bdevs_discovered": 3, 00:15:05.198 "num_base_bdevs_operational": 3, 00:15:05.198 "base_bdevs_list": [ 00:15:05.198 { 00:15:05.198 "name": null, 00:15:05.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.198 "is_configured": false, 00:15:05.198 "data_offset": 0, 00:15:05.198 "data_size": 63488 00:15:05.198 }, 00:15:05.198 { 
00:15:05.198 "name": "BaseBdev2", 00:15:05.198 "uuid": "b60ed88f-3189-5efa-9808-454aa857669c", 00:15:05.198 "is_configured": true, 00:15:05.198 "data_offset": 2048, 00:15:05.198 "data_size": 63488 00:15:05.198 }, 00:15:05.198 { 00:15:05.198 "name": "BaseBdev3", 00:15:05.198 "uuid": "3a123e4f-9b83-5412-b742-fe8f01990ba5", 00:15:05.198 "is_configured": true, 00:15:05.198 "data_offset": 2048, 00:15:05.198 "data_size": 63488 00:15:05.198 }, 00:15:05.198 { 00:15:05.198 "name": "BaseBdev4", 00:15:05.198 "uuid": "74f485ec-f8f0-5267-8945-6420c7ab8355", 00:15:05.198 "is_configured": true, 00:15:05.198 "data_offset": 2048, 00:15:05.198 "data_size": 63488 00:15:05.198 } 00:15:05.198 ] 00:15:05.198 }' 00:15:05.198 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.457 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:05.457 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.457 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:05.457 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:05.457 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.457 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.457 [2024-11-26 12:58:22.950867] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:05.457 [2024-11-26 12:58:22.953773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a980 00:15:05.457 [2024-11-26 12:58:22.956005] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:05.457 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.457 
12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:06.395 12:58:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.395 12:58:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.395 12:58:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.395 12:58:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.395 12:58:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.395 12:58:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.395 12:58:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.395 12:58:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.395 12:58:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.395 12:58:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.395 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.395 "name": "raid_bdev1", 00:15:06.395 "uuid": "497891f3-b46f-4f8f-8565-7455554e4e83", 00:15:06.395 "strip_size_kb": 64, 00:15:06.395 "state": "online", 00:15:06.395 "raid_level": "raid5f", 00:15:06.395 "superblock": true, 00:15:06.395 "num_base_bdevs": 4, 00:15:06.395 "num_base_bdevs_discovered": 4, 00:15:06.395 "num_base_bdevs_operational": 4, 00:15:06.395 "process": { 00:15:06.395 "type": "rebuild", 00:15:06.395 "target": "spare", 00:15:06.395 "progress": { 00:15:06.395 "blocks": 19200, 00:15:06.395 "percent": 10 00:15:06.395 } 00:15:06.395 }, 00:15:06.395 "base_bdevs_list": [ 00:15:06.395 { 00:15:06.395 "name": "spare", 00:15:06.395 "uuid": 
"a8df069e-206f-5e6b-b29b-ef23cba81f7a", 00:15:06.395 "is_configured": true, 00:15:06.395 "data_offset": 2048, 00:15:06.395 "data_size": 63488 00:15:06.395 }, 00:15:06.395 { 00:15:06.395 "name": "BaseBdev2", 00:15:06.395 "uuid": "b60ed88f-3189-5efa-9808-454aa857669c", 00:15:06.395 "is_configured": true, 00:15:06.395 "data_offset": 2048, 00:15:06.395 "data_size": 63488 00:15:06.395 }, 00:15:06.395 { 00:15:06.395 "name": "BaseBdev3", 00:15:06.395 "uuid": "3a123e4f-9b83-5412-b742-fe8f01990ba5", 00:15:06.395 "is_configured": true, 00:15:06.395 "data_offset": 2048, 00:15:06.395 "data_size": 63488 00:15:06.395 }, 00:15:06.395 { 00:15:06.395 "name": "BaseBdev4", 00:15:06.395 "uuid": "74f485ec-f8f0-5267-8945-6420c7ab8355", 00:15:06.395 "is_configured": true, 00:15:06.396 "data_offset": 2048, 00:15:06.396 "data_size": 63488 00:15:06.396 } 00:15:06.396 ] 00:15:06.396 }' 00:15:06.396 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.396 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:06.396 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.654 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:06.654 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:06.654 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:06.654 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:06.654 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:06.654 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:06.654 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=528 00:15:06.654 
12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:06.654 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.654 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.654 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.654 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.654 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.654 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.654 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.654 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.654 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.654 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.654 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.654 "name": "raid_bdev1", 00:15:06.654 "uuid": "497891f3-b46f-4f8f-8565-7455554e4e83", 00:15:06.654 "strip_size_kb": 64, 00:15:06.654 "state": "online", 00:15:06.654 "raid_level": "raid5f", 00:15:06.654 "superblock": true, 00:15:06.654 "num_base_bdevs": 4, 00:15:06.654 "num_base_bdevs_discovered": 4, 00:15:06.654 "num_base_bdevs_operational": 4, 00:15:06.654 "process": { 00:15:06.654 "type": "rebuild", 00:15:06.654 "target": "spare", 00:15:06.654 "progress": { 00:15:06.654 "blocks": 21120, 00:15:06.654 "percent": 11 00:15:06.654 } 00:15:06.654 }, 00:15:06.654 "base_bdevs_list": [ 00:15:06.654 { 00:15:06.654 "name": "spare", 00:15:06.654 "uuid": 
"a8df069e-206f-5e6b-b29b-ef23cba81f7a", 00:15:06.654 "is_configured": true, 00:15:06.654 "data_offset": 2048, 00:15:06.654 "data_size": 63488 00:15:06.654 }, 00:15:06.654 { 00:15:06.654 "name": "BaseBdev2", 00:15:06.654 "uuid": "b60ed88f-3189-5efa-9808-454aa857669c", 00:15:06.654 "is_configured": true, 00:15:06.654 "data_offset": 2048, 00:15:06.654 "data_size": 63488 00:15:06.654 }, 00:15:06.654 { 00:15:06.654 "name": "BaseBdev3", 00:15:06.654 "uuid": "3a123e4f-9b83-5412-b742-fe8f01990ba5", 00:15:06.654 "is_configured": true, 00:15:06.654 "data_offset": 2048, 00:15:06.654 "data_size": 63488 00:15:06.654 }, 00:15:06.654 { 00:15:06.654 "name": "BaseBdev4", 00:15:06.654 "uuid": "74f485ec-f8f0-5267-8945-6420c7ab8355", 00:15:06.654 "is_configured": true, 00:15:06.654 "data_offset": 2048, 00:15:06.654 "data_size": 63488 00:15:06.654 } 00:15:06.654 ] 00:15:06.654 }' 00:15:06.654 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.654 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:06.654 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.654 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:06.654 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:07.592 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:07.592 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.592 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.592 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.592 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:15:07.592 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.592 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.592 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.592 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.592 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.851 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.851 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.851 "name": "raid_bdev1", 00:15:07.851 "uuid": "497891f3-b46f-4f8f-8565-7455554e4e83", 00:15:07.851 "strip_size_kb": 64, 00:15:07.851 "state": "online", 00:15:07.851 "raid_level": "raid5f", 00:15:07.851 "superblock": true, 00:15:07.851 "num_base_bdevs": 4, 00:15:07.851 "num_base_bdevs_discovered": 4, 00:15:07.851 "num_base_bdevs_operational": 4, 00:15:07.851 "process": { 00:15:07.851 "type": "rebuild", 00:15:07.851 "target": "spare", 00:15:07.851 "progress": { 00:15:07.851 "blocks": 42240, 00:15:07.851 "percent": 22 00:15:07.851 } 00:15:07.851 }, 00:15:07.851 "base_bdevs_list": [ 00:15:07.851 { 00:15:07.851 "name": "spare", 00:15:07.851 "uuid": "a8df069e-206f-5e6b-b29b-ef23cba81f7a", 00:15:07.851 "is_configured": true, 00:15:07.851 "data_offset": 2048, 00:15:07.851 "data_size": 63488 00:15:07.851 }, 00:15:07.851 { 00:15:07.851 "name": "BaseBdev2", 00:15:07.851 "uuid": "b60ed88f-3189-5efa-9808-454aa857669c", 00:15:07.851 "is_configured": true, 00:15:07.851 "data_offset": 2048, 00:15:07.851 "data_size": 63488 00:15:07.852 }, 00:15:07.852 { 00:15:07.852 "name": "BaseBdev3", 00:15:07.852 "uuid": "3a123e4f-9b83-5412-b742-fe8f01990ba5", 00:15:07.852 "is_configured": true, 00:15:07.852 
"data_offset": 2048, 00:15:07.852 "data_size": 63488 00:15:07.852 }, 00:15:07.852 { 00:15:07.852 "name": "BaseBdev4", 00:15:07.852 "uuid": "74f485ec-f8f0-5267-8945-6420c7ab8355", 00:15:07.852 "is_configured": true, 00:15:07.852 "data_offset": 2048, 00:15:07.852 "data_size": 63488 00:15:07.852 } 00:15:07.852 ] 00:15:07.852 }' 00:15:07.852 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.852 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:07.852 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.852 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:07.852 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:08.826 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:08.826 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.826 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.826 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.826 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.827 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.827 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.827 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.827 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.827 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:08.827 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.827 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.827 "name": "raid_bdev1", 00:15:08.827 "uuid": "497891f3-b46f-4f8f-8565-7455554e4e83", 00:15:08.827 "strip_size_kb": 64, 00:15:08.827 "state": "online", 00:15:08.827 "raid_level": "raid5f", 00:15:08.827 "superblock": true, 00:15:08.827 "num_base_bdevs": 4, 00:15:08.827 "num_base_bdevs_discovered": 4, 00:15:08.827 "num_base_bdevs_operational": 4, 00:15:08.827 "process": { 00:15:08.827 "type": "rebuild", 00:15:08.827 "target": "spare", 00:15:08.827 "progress": { 00:15:08.827 "blocks": 65280, 00:15:08.827 "percent": 34 00:15:08.827 } 00:15:08.827 }, 00:15:08.827 "base_bdevs_list": [ 00:15:08.827 { 00:15:08.827 "name": "spare", 00:15:08.827 "uuid": "a8df069e-206f-5e6b-b29b-ef23cba81f7a", 00:15:08.827 "is_configured": true, 00:15:08.827 "data_offset": 2048, 00:15:08.827 "data_size": 63488 00:15:08.827 }, 00:15:08.827 { 00:15:08.827 "name": "BaseBdev2", 00:15:08.827 "uuid": "b60ed88f-3189-5efa-9808-454aa857669c", 00:15:08.827 "is_configured": true, 00:15:08.827 "data_offset": 2048, 00:15:08.827 "data_size": 63488 00:15:08.827 }, 00:15:08.827 { 00:15:08.827 "name": "BaseBdev3", 00:15:08.827 "uuid": "3a123e4f-9b83-5412-b742-fe8f01990ba5", 00:15:08.827 "is_configured": true, 00:15:08.827 "data_offset": 2048, 00:15:08.827 "data_size": 63488 00:15:08.827 }, 00:15:08.827 { 00:15:08.827 "name": "BaseBdev4", 00:15:08.827 "uuid": "74f485ec-f8f0-5267-8945-6420c7ab8355", 00:15:08.827 "is_configured": true, 00:15:08.827 "data_offset": 2048, 00:15:08.827 "data_size": 63488 00:15:08.827 } 00:15:08.827 ] 00:15:08.827 }' 00:15:08.827 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.104 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:15:09.104 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.104 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.104 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:10.043 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:10.043 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:10.043 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.043 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:10.043 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:10.043 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.043 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.043 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.043 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.043 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.043 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.043 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.043 "name": "raid_bdev1", 00:15:10.043 "uuid": "497891f3-b46f-4f8f-8565-7455554e4e83", 00:15:10.043 "strip_size_kb": 64, 00:15:10.043 "state": "online", 00:15:10.043 "raid_level": "raid5f", 00:15:10.043 "superblock": true, 00:15:10.043 "num_base_bdevs": 4, 00:15:10.043 "num_base_bdevs_discovered": 4, 
00:15:10.043 "num_base_bdevs_operational": 4, 00:15:10.043 "process": { 00:15:10.043 "type": "rebuild", 00:15:10.043 "target": "spare", 00:15:10.043 "progress": { 00:15:10.043 "blocks": 86400, 00:15:10.043 "percent": 45 00:15:10.043 } 00:15:10.043 }, 00:15:10.043 "base_bdevs_list": [ 00:15:10.043 { 00:15:10.043 "name": "spare", 00:15:10.043 "uuid": "a8df069e-206f-5e6b-b29b-ef23cba81f7a", 00:15:10.043 "is_configured": true, 00:15:10.043 "data_offset": 2048, 00:15:10.043 "data_size": 63488 00:15:10.043 }, 00:15:10.043 { 00:15:10.043 "name": "BaseBdev2", 00:15:10.043 "uuid": "b60ed88f-3189-5efa-9808-454aa857669c", 00:15:10.043 "is_configured": true, 00:15:10.043 "data_offset": 2048, 00:15:10.043 "data_size": 63488 00:15:10.043 }, 00:15:10.043 { 00:15:10.043 "name": "BaseBdev3", 00:15:10.043 "uuid": "3a123e4f-9b83-5412-b742-fe8f01990ba5", 00:15:10.043 "is_configured": true, 00:15:10.043 "data_offset": 2048, 00:15:10.043 "data_size": 63488 00:15:10.043 }, 00:15:10.043 { 00:15:10.043 "name": "BaseBdev4", 00:15:10.043 "uuid": "74f485ec-f8f0-5267-8945-6420c7ab8355", 00:15:10.043 "is_configured": true, 00:15:10.043 "data_offset": 2048, 00:15:10.043 "data_size": 63488 00:15:10.043 } 00:15:10.043 ] 00:15:10.043 }' 00:15:10.043 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.043 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:10.043 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.043 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:10.043 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:11.423 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:11.423 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:15:11.423 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.423 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.423 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.423 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.423 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.423 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.423 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.423 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.423 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.423 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.423 "name": "raid_bdev1", 00:15:11.423 "uuid": "497891f3-b46f-4f8f-8565-7455554e4e83", 00:15:11.423 "strip_size_kb": 64, 00:15:11.423 "state": "online", 00:15:11.423 "raid_level": "raid5f", 00:15:11.423 "superblock": true, 00:15:11.423 "num_base_bdevs": 4, 00:15:11.423 "num_base_bdevs_discovered": 4, 00:15:11.423 "num_base_bdevs_operational": 4, 00:15:11.423 "process": { 00:15:11.423 "type": "rebuild", 00:15:11.423 "target": "spare", 00:15:11.423 "progress": { 00:15:11.423 "blocks": 109440, 00:15:11.423 "percent": 57 00:15:11.423 } 00:15:11.423 }, 00:15:11.423 "base_bdevs_list": [ 00:15:11.423 { 00:15:11.423 "name": "spare", 00:15:11.423 "uuid": "a8df069e-206f-5e6b-b29b-ef23cba81f7a", 00:15:11.423 "is_configured": true, 00:15:11.423 "data_offset": 2048, 00:15:11.423 "data_size": 63488 00:15:11.423 }, 00:15:11.423 { 00:15:11.423 "name": "BaseBdev2", 
00:15:11.423 "uuid": "b60ed88f-3189-5efa-9808-454aa857669c", 00:15:11.423 "is_configured": true, 00:15:11.423 "data_offset": 2048, 00:15:11.423 "data_size": 63488 00:15:11.423 }, 00:15:11.423 { 00:15:11.423 "name": "BaseBdev3", 00:15:11.423 "uuid": "3a123e4f-9b83-5412-b742-fe8f01990ba5", 00:15:11.423 "is_configured": true, 00:15:11.424 "data_offset": 2048, 00:15:11.424 "data_size": 63488 00:15:11.424 }, 00:15:11.424 { 00:15:11.424 "name": "BaseBdev4", 00:15:11.424 "uuid": "74f485ec-f8f0-5267-8945-6420c7ab8355", 00:15:11.424 "is_configured": true, 00:15:11.424 "data_offset": 2048, 00:15:11.424 "data_size": 63488 00:15:11.424 } 00:15:11.424 ] 00:15:11.424 }' 00:15:11.424 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.424 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:11.424 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.424 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:11.424 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:12.363 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:12.363 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.363 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.363 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.363 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:12.363 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.363 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:12.363 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.363 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.363 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.363 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.363 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.363 "name": "raid_bdev1", 00:15:12.363 "uuid": "497891f3-b46f-4f8f-8565-7455554e4e83", 00:15:12.363 "strip_size_kb": 64, 00:15:12.363 "state": "online", 00:15:12.363 "raid_level": "raid5f", 00:15:12.363 "superblock": true, 00:15:12.363 "num_base_bdevs": 4, 00:15:12.363 "num_base_bdevs_discovered": 4, 00:15:12.363 "num_base_bdevs_operational": 4, 00:15:12.363 "process": { 00:15:12.363 "type": "rebuild", 00:15:12.363 "target": "spare", 00:15:12.363 "progress": { 00:15:12.363 "blocks": 130560, 00:15:12.363 "percent": 68 00:15:12.363 } 00:15:12.363 }, 00:15:12.363 "base_bdevs_list": [ 00:15:12.363 { 00:15:12.363 "name": "spare", 00:15:12.363 "uuid": "a8df069e-206f-5e6b-b29b-ef23cba81f7a", 00:15:12.363 "is_configured": true, 00:15:12.363 "data_offset": 2048, 00:15:12.363 "data_size": 63488 00:15:12.363 }, 00:15:12.363 { 00:15:12.363 "name": "BaseBdev2", 00:15:12.363 "uuid": "b60ed88f-3189-5efa-9808-454aa857669c", 00:15:12.363 "is_configured": true, 00:15:12.363 "data_offset": 2048, 00:15:12.363 "data_size": 63488 00:15:12.363 }, 00:15:12.363 { 00:15:12.363 "name": "BaseBdev3", 00:15:12.363 "uuid": "3a123e4f-9b83-5412-b742-fe8f01990ba5", 00:15:12.363 "is_configured": true, 00:15:12.363 "data_offset": 2048, 00:15:12.363 "data_size": 63488 00:15:12.363 }, 00:15:12.363 { 00:15:12.363 "name": "BaseBdev4", 00:15:12.363 "uuid": "74f485ec-f8f0-5267-8945-6420c7ab8355", 00:15:12.363 "is_configured": true, 
00:15:12.363 "data_offset": 2048, 00:15:12.363 "data_size": 63488 00:15:12.363 } 00:15:12.363 ] 00:15:12.363 }' 00:15:12.363 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.363 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:12.364 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.364 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:12.364 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:13.747 12:58:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:13.747 12:58:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.747 12:58:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.747 12:58:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.747 12:58:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.747 12:58:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.747 12:58:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.747 12:58:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.747 12:58:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.747 12:58:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.747 12:58:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.747 12:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:15:13.747 "name": "raid_bdev1", 00:15:13.747 "uuid": "497891f3-b46f-4f8f-8565-7455554e4e83", 00:15:13.747 "strip_size_kb": 64, 00:15:13.747 "state": "online", 00:15:13.747 "raid_level": "raid5f", 00:15:13.747 "superblock": true, 00:15:13.747 "num_base_bdevs": 4, 00:15:13.747 "num_base_bdevs_discovered": 4, 00:15:13.747 "num_base_bdevs_operational": 4, 00:15:13.747 "process": { 00:15:13.747 "type": "rebuild", 00:15:13.747 "target": "spare", 00:15:13.747 "progress": { 00:15:13.747 "blocks": 153600, 00:15:13.747 "percent": 80 00:15:13.747 } 00:15:13.747 }, 00:15:13.747 "base_bdevs_list": [ 00:15:13.747 { 00:15:13.747 "name": "spare", 00:15:13.747 "uuid": "a8df069e-206f-5e6b-b29b-ef23cba81f7a", 00:15:13.747 "is_configured": true, 00:15:13.747 "data_offset": 2048, 00:15:13.747 "data_size": 63488 00:15:13.747 }, 00:15:13.747 { 00:15:13.747 "name": "BaseBdev2", 00:15:13.747 "uuid": "b60ed88f-3189-5efa-9808-454aa857669c", 00:15:13.747 "is_configured": true, 00:15:13.747 "data_offset": 2048, 00:15:13.747 "data_size": 63488 00:15:13.747 }, 00:15:13.747 { 00:15:13.747 "name": "BaseBdev3", 00:15:13.747 "uuid": "3a123e4f-9b83-5412-b742-fe8f01990ba5", 00:15:13.747 "is_configured": true, 00:15:13.747 "data_offset": 2048, 00:15:13.747 "data_size": 63488 00:15:13.747 }, 00:15:13.747 { 00:15:13.747 "name": "BaseBdev4", 00:15:13.747 "uuid": "74f485ec-f8f0-5267-8945-6420c7ab8355", 00:15:13.747 "is_configured": true, 00:15:13.747 "data_offset": 2048, 00:15:13.747 "data_size": 63488 00:15:13.747 } 00:15:13.747 ] 00:15:13.747 }' 00:15:13.747 12:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.747 12:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.747 12:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.747 12:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare 
== \s\p\a\r\e ]] 00:15:13.747 12:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:14.687 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:14.687 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.687 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.687 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.687 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.687 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.687 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.687 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.687 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.687 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.687 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.687 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.687 "name": "raid_bdev1", 00:15:14.687 "uuid": "497891f3-b46f-4f8f-8565-7455554e4e83", 00:15:14.687 "strip_size_kb": 64, 00:15:14.687 "state": "online", 00:15:14.687 "raid_level": "raid5f", 00:15:14.687 "superblock": true, 00:15:14.687 "num_base_bdevs": 4, 00:15:14.687 "num_base_bdevs_discovered": 4, 00:15:14.687 "num_base_bdevs_operational": 4, 00:15:14.687 "process": { 00:15:14.687 "type": "rebuild", 00:15:14.687 "target": "spare", 00:15:14.687 "progress": { 00:15:14.687 "blocks": 174720, 00:15:14.688 "percent": 91 00:15:14.688 
} 00:15:14.688 }, 00:15:14.688 "base_bdevs_list": [ 00:15:14.688 { 00:15:14.688 "name": "spare", 00:15:14.688 "uuid": "a8df069e-206f-5e6b-b29b-ef23cba81f7a", 00:15:14.688 "is_configured": true, 00:15:14.688 "data_offset": 2048, 00:15:14.688 "data_size": 63488 00:15:14.688 }, 00:15:14.688 { 00:15:14.688 "name": "BaseBdev2", 00:15:14.688 "uuid": "b60ed88f-3189-5efa-9808-454aa857669c", 00:15:14.688 "is_configured": true, 00:15:14.688 "data_offset": 2048, 00:15:14.688 "data_size": 63488 00:15:14.688 }, 00:15:14.688 { 00:15:14.688 "name": "BaseBdev3", 00:15:14.688 "uuid": "3a123e4f-9b83-5412-b742-fe8f01990ba5", 00:15:14.688 "is_configured": true, 00:15:14.688 "data_offset": 2048, 00:15:14.688 "data_size": 63488 00:15:14.688 }, 00:15:14.688 { 00:15:14.688 "name": "BaseBdev4", 00:15:14.688 "uuid": "74f485ec-f8f0-5267-8945-6420c7ab8355", 00:15:14.688 "is_configured": true, 00:15:14.688 "data_offset": 2048, 00:15:14.688 "data_size": 63488 00:15:14.688 } 00:15:14.688 ] 00:15:14.688 }' 00:15:14.688 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.688 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.688 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.688 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.688 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:15.627 [2024-11-26 12:58:32.995041] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:15.628 [2024-11-26 12:58:32.995157] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:15.628 [2024-11-26 12:58:32.995293] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.888 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:15.888 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.888 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.888 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.888 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.888 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.888 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.888 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.888 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.888 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.888 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.888 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.888 "name": "raid_bdev1", 00:15:15.888 "uuid": "497891f3-b46f-4f8f-8565-7455554e4e83", 00:15:15.888 "strip_size_kb": 64, 00:15:15.888 "state": "online", 00:15:15.888 "raid_level": "raid5f", 00:15:15.888 "superblock": true, 00:15:15.888 "num_base_bdevs": 4, 00:15:15.888 "num_base_bdevs_discovered": 4, 00:15:15.888 "num_base_bdevs_operational": 4, 00:15:15.888 "base_bdevs_list": [ 00:15:15.888 { 00:15:15.888 "name": "spare", 00:15:15.888 "uuid": "a8df069e-206f-5e6b-b29b-ef23cba81f7a", 00:15:15.888 "is_configured": true, 00:15:15.888 "data_offset": 2048, 00:15:15.888 "data_size": 63488 00:15:15.888 }, 00:15:15.888 { 00:15:15.888 "name": "BaseBdev2", 00:15:15.888 "uuid": 
"b60ed88f-3189-5efa-9808-454aa857669c", 00:15:15.888 "is_configured": true, 00:15:15.888 "data_offset": 2048, 00:15:15.888 "data_size": 63488 00:15:15.888 }, 00:15:15.888 { 00:15:15.888 "name": "BaseBdev3", 00:15:15.888 "uuid": "3a123e4f-9b83-5412-b742-fe8f01990ba5", 00:15:15.888 "is_configured": true, 00:15:15.888 "data_offset": 2048, 00:15:15.888 "data_size": 63488 00:15:15.888 }, 00:15:15.888 { 00:15:15.888 "name": "BaseBdev4", 00:15:15.888 "uuid": "74f485ec-f8f0-5267-8945-6420c7ab8355", 00:15:15.888 "is_configured": true, 00:15:15.888 "data_offset": 2048, 00:15:15.888 "data_size": 63488 00:15:15.888 } 00:15:15.888 ] 00:15:15.888 }' 00:15:15.888 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.888 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:15.888 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.888 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:15.888 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:15.888 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:15.888 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.888 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:15.888 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:15.888 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.888 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.888 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.888 
12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.888 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.888 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.888 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.888 "name": "raid_bdev1", 00:15:15.888 "uuid": "497891f3-b46f-4f8f-8565-7455554e4e83", 00:15:15.888 "strip_size_kb": 64, 00:15:15.888 "state": "online", 00:15:15.888 "raid_level": "raid5f", 00:15:15.888 "superblock": true, 00:15:15.888 "num_base_bdevs": 4, 00:15:15.888 "num_base_bdevs_discovered": 4, 00:15:15.888 "num_base_bdevs_operational": 4, 00:15:15.888 "base_bdevs_list": [ 00:15:15.888 { 00:15:15.888 "name": "spare", 00:15:15.888 "uuid": "a8df069e-206f-5e6b-b29b-ef23cba81f7a", 00:15:15.888 "is_configured": true, 00:15:15.888 "data_offset": 2048, 00:15:15.888 "data_size": 63488 00:15:15.888 }, 00:15:15.888 { 00:15:15.888 "name": "BaseBdev2", 00:15:15.888 "uuid": "b60ed88f-3189-5efa-9808-454aa857669c", 00:15:15.888 "is_configured": true, 00:15:15.888 "data_offset": 2048, 00:15:15.888 "data_size": 63488 00:15:15.888 }, 00:15:15.888 { 00:15:15.888 "name": "BaseBdev3", 00:15:15.888 "uuid": "3a123e4f-9b83-5412-b742-fe8f01990ba5", 00:15:15.888 "is_configured": true, 00:15:15.888 "data_offset": 2048, 00:15:15.888 "data_size": 63488 00:15:15.888 }, 00:15:15.888 { 00:15:15.888 "name": "BaseBdev4", 00:15:15.888 "uuid": "74f485ec-f8f0-5267-8945-6420c7ab8355", 00:15:15.888 "is_configured": true, 00:15:15.888 "data_offset": 2048, 00:15:15.888 "data_size": 63488 00:15:15.888 } 00:15:15.888 ] 00:15:15.889 }' 00:15:15.889 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.149 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:16.149 12:58:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.149 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:16.149 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:16.149 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.149 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.149 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.149 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.149 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:16.149 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.149 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.149 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.149 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.149 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.149 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.149 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.149 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.149 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.149 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:15:16.149 "name": "raid_bdev1", 00:15:16.149 "uuid": "497891f3-b46f-4f8f-8565-7455554e4e83", 00:15:16.149 "strip_size_kb": 64, 00:15:16.149 "state": "online", 00:15:16.149 "raid_level": "raid5f", 00:15:16.149 "superblock": true, 00:15:16.149 "num_base_bdevs": 4, 00:15:16.149 "num_base_bdevs_discovered": 4, 00:15:16.149 "num_base_bdevs_operational": 4, 00:15:16.149 "base_bdevs_list": [ 00:15:16.149 { 00:15:16.149 "name": "spare", 00:15:16.149 "uuid": "a8df069e-206f-5e6b-b29b-ef23cba81f7a", 00:15:16.149 "is_configured": true, 00:15:16.149 "data_offset": 2048, 00:15:16.149 "data_size": 63488 00:15:16.149 }, 00:15:16.149 { 00:15:16.149 "name": "BaseBdev2", 00:15:16.149 "uuid": "b60ed88f-3189-5efa-9808-454aa857669c", 00:15:16.149 "is_configured": true, 00:15:16.149 "data_offset": 2048, 00:15:16.149 "data_size": 63488 00:15:16.149 }, 00:15:16.149 { 00:15:16.149 "name": "BaseBdev3", 00:15:16.149 "uuid": "3a123e4f-9b83-5412-b742-fe8f01990ba5", 00:15:16.149 "is_configured": true, 00:15:16.149 "data_offset": 2048, 00:15:16.149 "data_size": 63488 00:15:16.149 }, 00:15:16.149 { 00:15:16.149 "name": "BaseBdev4", 00:15:16.149 "uuid": "74f485ec-f8f0-5267-8945-6420c7ab8355", 00:15:16.149 "is_configured": true, 00:15:16.149 "data_offset": 2048, 00:15:16.149 "data_size": 63488 00:15:16.149 } 00:15:16.149 ] 00:15:16.149 }' 00:15:16.149 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.149 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.410 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:16.410 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.410 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.410 [2024-11-26 12:58:34.039063] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:16.410 [2024-11-26 
12:58:34.039091] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:16.410 [2024-11-26 12:58:34.039167] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.410 [2024-11-26 12:58:34.039264] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:16.410 [2024-11-26 12:58:34.039285] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:16.410 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.410 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.410 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:16.410 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.410 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.410 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.669 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:16.670 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:16.670 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:16.670 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:16.670 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:16.670 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:16.670 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:16.670 12:58:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:16.670 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:16.670 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:16.670 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:16.670 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:16.670 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:16.670 /dev/nbd0 00:15:16.670 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:16.670 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:16.670 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:16.670 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:16.670 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:16.670 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:16.670 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:16.670 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:16.670 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:16.670 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:16.670 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:16.929 1+0 records in 00:15:16.929 1+0 
records out 00:15:16.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499978 s, 8.2 MB/s 00:15:16.929 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.930 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:16.930 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.930 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:16.930 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:16.930 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:16.930 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:16.930 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:16.930 /dev/nbd1 00:15:16.930 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:16.930 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:16.930 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:16.930 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:16.930 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:16.930 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:16.930 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:16.930 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:16.930 12:58:34 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:16.930 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:16.930 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:16.930 1+0 records in 00:15:16.930 1+0 records out 00:15:16.930 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000581377 s, 7.0 MB/s 00:15:17.189 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.189 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:17.189 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.189 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:17.189 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:17.189 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:17.189 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:17.189 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:17.189 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:17.189 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:17.189 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:17.189 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:17.190 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 
-- # local i 00:15:17.190 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:17.190 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:17.449 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:17.449 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:17.449 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:17.449 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.449 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.449 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:17.449 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:17.449 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.449 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:17.449 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:17.449 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:17.449 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:17.449 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:17.449 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.449 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.449 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd1 /proc/partitions 00:15:17.709 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:17.709 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.709 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:17.709 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:17.709 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.709 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.709 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.709 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:17.709 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.709 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.709 [2024-11-26 12:58:35.150905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:17.709 [2024-11-26 12:58:35.150962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.709 [2024-11-26 12:58:35.150982] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:17.709 [2024-11-26 12:58:35.150992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.709 [2024-11-26 12:58:35.153142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.709 [2024-11-26 12:58:35.153195] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:17.710 [2024-11-26 12:58:35.153270] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:17.710 [2024-11-26 12:58:35.153318] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:17.710 [2024-11-26 12:58:35.153434] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:17.710 [2024-11-26 12:58:35.153516] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:17.710 [2024-11-26 12:58:35.153600] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:17.710 spare 00:15:17.710 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.710 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:17.710 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.710 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.710 [2024-11-26 12:58:35.253496] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:15:17.710 [2024-11-26 12:58:35.253530] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:17.710 [2024-11-26 12:58:35.253764] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049030 00:15:17.710 [2024-11-26 12:58:35.254165] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:15:17.710 [2024-11-26 12:58:35.254195] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:15:17.710 [2024-11-26 12:58:35.254326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.710 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.710 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:17.710 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.710 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.710 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.710 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.710 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:17.710 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.710 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.710 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.710 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.710 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.710 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.710 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.710 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.710 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.710 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.710 "name": "raid_bdev1", 00:15:17.710 "uuid": "497891f3-b46f-4f8f-8565-7455554e4e83", 00:15:17.710 "strip_size_kb": 64, 00:15:17.710 "state": "online", 00:15:17.710 "raid_level": "raid5f", 00:15:17.710 "superblock": true, 00:15:17.710 "num_base_bdevs": 4, 00:15:17.710 "num_base_bdevs_discovered": 4, 00:15:17.710 "num_base_bdevs_operational": 4, 00:15:17.710 "base_bdevs_list": [ 00:15:17.710 { 
00:15:17.710 "name": "spare", 00:15:17.710 "uuid": "a8df069e-206f-5e6b-b29b-ef23cba81f7a", 00:15:17.710 "is_configured": true, 00:15:17.710 "data_offset": 2048, 00:15:17.710 "data_size": 63488 00:15:17.710 }, 00:15:17.710 { 00:15:17.710 "name": "BaseBdev2", 00:15:17.710 "uuid": "b60ed88f-3189-5efa-9808-454aa857669c", 00:15:17.710 "is_configured": true, 00:15:17.710 "data_offset": 2048, 00:15:17.710 "data_size": 63488 00:15:17.710 }, 00:15:17.710 { 00:15:17.710 "name": "BaseBdev3", 00:15:17.710 "uuid": "3a123e4f-9b83-5412-b742-fe8f01990ba5", 00:15:17.710 "is_configured": true, 00:15:17.710 "data_offset": 2048, 00:15:17.710 "data_size": 63488 00:15:17.710 }, 00:15:17.710 { 00:15:17.710 "name": "BaseBdev4", 00:15:17.710 "uuid": "74f485ec-f8f0-5267-8945-6420c7ab8355", 00:15:17.710 "is_configured": true, 00:15:17.710 "data_offset": 2048, 00:15:17.710 "data_size": 63488 00:15:17.710 } 00:15:17.710 ] 00:15:17.710 }' 00:15:17.710 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.710 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.281 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:18.281 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.281 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:18.281 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:18.281 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.281 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.281 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.281 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.281 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.281 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.281 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.281 "name": "raid_bdev1", 00:15:18.281 "uuid": "497891f3-b46f-4f8f-8565-7455554e4e83", 00:15:18.281 "strip_size_kb": 64, 00:15:18.281 "state": "online", 00:15:18.281 "raid_level": "raid5f", 00:15:18.281 "superblock": true, 00:15:18.281 "num_base_bdevs": 4, 00:15:18.281 "num_base_bdevs_discovered": 4, 00:15:18.281 "num_base_bdevs_operational": 4, 00:15:18.281 "base_bdevs_list": [ 00:15:18.281 { 00:15:18.281 "name": "spare", 00:15:18.281 "uuid": "a8df069e-206f-5e6b-b29b-ef23cba81f7a", 00:15:18.281 "is_configured": true, 00:15:18.281 "data_offset": 2048, 00:15:18.281 "data_size": 63488 00:15:18.281 }, 00:15:18.281 { 00:15:18.281 "name": "BaseBdev2", 00:15:18.281 "uuid": "b60ed88f-3189-5efa-9808-454aa857669c", 00:15:18.281 "is_configured": true, 00:15:18.281 "data_offset": 2048, 00:15:18.281 "data_size": 63488 00:15:18.281 }, 00:15:18.281 { 00:15:18.281 "name": "BaseBdev3", 00:15:18.281 "uuid": "3a123e4f-9b83-5412-b742-fe8f01990ba5", 00:15:18.281 "is_configured": true, 00:15:18.281 "data_offset": 2048, 00:15:18.281 "data_size": 63488 00:15:18.281 }, 00:15:18.281 { 00:15:18.281 "name": "BaseBdev4", 00:15:18.281 "uuid": "74f485ec-f8f0-5267-8945-6420c7ab8355", 00:15:18.281 "is_configured": true, 00:15:18.281 "data_offset": 2048, 00:15:18.281 "data_size": 63488 00:15:18.281 } 00:15:18.281 ] 00:15:18.281 }' 00:15:18.281 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.281 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:18.281 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:15:18.281 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:18.281 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.281 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:18.281 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.281 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.281 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.281 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:18.281 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:18.281 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.281 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.541 [2024-11-26 12:58:35.962840] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:18.541 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.541 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:18.541 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.541 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.541 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.541 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.541 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.541 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.541 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.541 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.541 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.541 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.541 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.541 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.541 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.541 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.541 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.541 "name": "raid_bdev1", 00:15:18.541 "uuid": "497891f3-b46f-4f8f-8565-7455554e4e83", 00:15:18.541 "strip_size_kb": 64, 00:15:18.541 "state": "online", 00:15:18.541 "raid_level": "raid5f", 00:15:18.541 "superblock": true, 00:15:18.541 "num_base_bdevs": 4, 00:15:18.541 "num_base_bdevs_discovered": 3, 00:15:18.541 "num_base_bdevs_operational": 3, 00:15:18.541 "base_bdevs_list": [ 00:15:18.541 { 00:15:18.541 "name": null, 00:15:18.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.541 "is_configured": false, 00:15:18.541 "data_offset": 0, 00:15:18.541 "data_size": 63488 00:15:18.541 }, 00:15:18.541 { 00:15:18.541 "name": "BaseBdev2", 00:15:18.541 "uuid": "b60ed88f-3189-5efa-9808-454aa857669c", 00:15:18.541 "is_configured": true, 00:15:18.541 "data_offset": 2048, 00:15:18.541 "data_size": 63488 00:15:18.541 }, 00:15:18.541 
{ 00:15:18.541 "name": "BaseBdev3", 00:15:18.541 "uuid": "3a123e4f-9b83-5412-b742-fe8f01990ba5", 00:15:18.541 "is_configured": true, 00:15:18.541 "data_offset": 2048, 00:15:18.541 "data_size": 63488 00:15:18.541 }, 00:15:18.541 { 00:15:18.541 "name": "BaseBdev4", 00:15:18.541 "uuid": "74f485ec-f8f0-5267-8945-6420c7ab8355", 00:15:18.541 "is_configured": true, 00:15:18.541 "data_offset": 2048, 00:15:18.541 "data_size": 63488 00:15:18.541 } 00:15:18.541 ] 00:15:18.541 }' 00:15:18.541 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.541 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.801 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:18.801 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.801 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.801 [2024-11-26 12:58:36.390123] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:18.801 [2024-11-26 12:58:36.390355] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:18.801 [2024-11-26 12:58:36.390418] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
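
The remove/re-add cycle the trace above walks through (degrade to 3 operational base bdevs, then re-add `spare` and watch a rebuild start) boils down to a short RPC sequence. A dry-run sketch follows: `rpc` here is stubbed to echo the command it would send, since a live SPDK target is assumed in the real run (the harness invokes `scripts/rpc.py -s /var/tmp/spdk.sock`, as the log shows).

```shell
# Dry-run sketch of the degrade/re-add cycle exercised in this test.
# 'rpc' is a stub that prints the RPC it would issue; against a live
# target it would wrap: scripts/rpc.py -s /var/tmp/spdk.sock
rpc() { echo "rpc.py $*"; }

rpc bdev_raid_remove_base_bdev spare          # degrade: 4 -> 3 operational, state stays "online"
rpc bdev_raid_get_bdevs all                   # verify num_base_bdevs_discovered dropped to 3
rpc bdev_raid_add_base_bdev raid_bdev1 spare  # re-add the base bdev; a rebuild process starts
rpc bdev_raid_get_bdevs all                   # expect "process": {"type": "rebuild", "target": "spare"}
```

In the trace, the re-add happens implicitly through examine ("Re-adding bdev spare to raid bdev raid_bdev1.") when the passthru bdev is recreated, rather than via an explicit `bdev_raid_add_base_bdev` call; the stubbed sequence just makes the state transitions explicit.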
00:15:18.801 [2024-11-26 12:58:36.390489] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:18.801 [2024-11-26 12:58:36.393647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049100 00:15:18.801 [2024-11-26 12:58:36.395841] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:18.801 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.801 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:19.742 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:19.742 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.742 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.742 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:19.742 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.742 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.742 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.742 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.742 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.002 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.002 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.002 "name": "raid_bdev1", 00:15:20.002 "uuid": "497891f3-b46f-4f8f-8565-7455554e4e83", 00:15:20.002 "strip_size_kb": 64, 00:15:20.002 "state": "online", 00:15:20.002 
"raid_level": "raid5f", 00:15:20.002 "superblock": true, 00:15:20.002 "num_base_bdevs": 4, 00:15:20.002 "num_base_bdevs_discovered": 4, 00:15:20.002 "num_base_bdevs_operational": 4, 00:15:20.002 "process": { 00:15:20.002 "type": "rebuild", 00:15:20.002 "target": "spare", 00:15:20.002 "progress": { 00:15:20.002 "blocks": 19200, 00:15:20.002 "percent": 10 00:15:20.002 } 00:15:20.002 }, 00:15:20.002 "base_bdevs_list": [ 00:15:20.002 { 00:15:20.002 "name": "spare", 00:15:20.002 "uuid": "a8df069e-206f-5e6b-b29b-ef23cba81f7a", 00:15:20.002 "is_configured": true, 00:15:20.002 "data_offset": 2048, 00:15:20.002 "data_size": 63488 00:15:20.002 }, 00:15:20.002 { 00:15:20.002 "name": "BaseBdev2", 00:15:20.002 "uuid": "b60ed88f-3189-5efa-9808-454aa857669c", 00:15:20.002 "is_configured": true, 00:15:20.002 "data_offset": 2048, 00:15:20.002 "data_size": 63488 00:15:20.002 }, 00:15:20.002 { 00:15:20.002 "name": "BaseBdev3", 00:15:20.002 "uuid": "3a123e4f-9b83-5412-b742-fe8f01990ba5", 00:15:20.002 "is_configured": true, 00:15:20.002 "data_offset": 2048, 00:15:20.002 "data_size": 63488 00:15:20.002 }, 00:15:20.002 { 00:15:20.002 "name": "BaseBdev4", 00:15:20.002 "uuid": "74f485ec-f8f0-5267-8945-6420c7ab8355", 00:15:20.002 "is_configured": true, 00:15:20.002 "data_offset": 2048, 00:15:20.002 "data_size": 63488 00:15:20.002 } 00:15:20.002 ] 00:15:20.002 }' 00:15:20.002 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.002 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:20.002 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.002 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:20.002 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:20.002 12:58:37 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.002 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.002 [2024-11-26 12:58:37.562795] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:20.002 [2024-11-26 12:58:37.600949] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:20.002 [2024-11-26 12:58:37.601066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.002 [2024-11-26 12:58:37.601088] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:20.002 [2024-11-26 12:58:37.601095] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:20.002 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.002 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:20.002 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.002 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.002 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.002 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.002 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.002 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.002 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.002 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.002 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:20.002 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.002 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.002 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.002 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.002 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.002 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.002 "name": "raid_bdev1", 00:15:20.002 "uuid": "497891f3-b46f-4f8f-8565-7455554e4e83", 00:15:20.002 "strip_size_kb": 64, 00:15:20.002 "state": "online", 00:15:20.002 "raid_level": "raid5f", 00:15:20.002 "superblock": true, 00:15:20.002 "num_base_bdevs": 4, 00:15:20.002 "num_base_bdevs_discovered": 3, 00:15:20.002 "num_base_bdevs_operational": 3, 00:15:20.002 "base_bdevs_list": [ 00:15:20.002 { 00:15:20.002 "name": null, 00:15:20.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.002 "is_configured": false, 00:15:20.002 "data_offset": 0, 00:15:20.002 "data_size": 63488 00:15:20.002 }, 00:15:20.002 { 00:15:20.002 "name": "BaseBdev2", 00:15:20.002 "uuid": "b60ed88f-3189-5efa-9808-454aa857669c", 00:15:20.003 "is_configured": true, 00:15:20.003 "data_offset": 2048, 00:15:20.003 "data_size": 63488 00:15:20.003 }, 00:15:20.003 { 00:15:20.003 "name": "BaseBdev3", 00:15:20.003 "uuid": "3a123e4f-9b83-5412-b742-fe8f01990ba5", 00:15:20.003 "is_configured": true, 00:15:20.003 "data_offset": 2048, 00:15:20.003 "data_size": 63488 00:15:20.003 }, 00:15:20.003 { 00:15:20.003 "name": "BaseBdev4", 00:15:20.003 "uuid": "74f485ec-f8f0-5267-8945-6420c7ab8355", 00:15:20.003 "is_configured": true, 00:15:20.003 "data_offset": 2048, 00:15:20.003 "data_size": 63488 00:15:20.003 } 00:15:20.003 ] 00:15:20.003 
}' 00:15:20.003 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.003 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.573 12:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:20.573 12:58:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.573 12:58:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.573 [2024-11-26 12:58:38.068978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:20.573 [2024-11-26 12:58:38.069070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.573 [2024-11-26 12:58:38.069114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:15:20.573 [2024-11-26 12:58:38.069142] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.573 [2024-11-26 12:58:38.069570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.573 [2024-11-26 12:58:38.069627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:20.573 [2024-11-26 12:58:38.069730] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:20.573 [2024-11-26 12:58:38.069768] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:20.573 [2024-11-26 12:58:38.069809] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
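
The `verify_raid_bdev_process` checks seen throughout this log pipe `bdev_raid_get_bdevs` output through jq filters such as `'.process.type // "none"'` and `'.process.target // "none"'`. A minimal self-contained sketch of that extraction follows; the JSON is inlined from the rebuild record above, and sed stands in for jq so the snippet runs without a live target (the real harness assumes jq).

```shell
# Sketch of how the harness reads rebuild status from bdev_raid_get_bdevs
# output. The JSON fragment is copied from the log; sed replaces jq here
# purely so the example is self-contained.
info='{"process": {"type": "rebuild", "target": "spare", "progress": {"blocks": 19200, "percent": 10}}}'

ptype=$(printf '%s' "$info" | sed -n 's/.*"type": *"\([^"]*\)".*/\1/p')
target=$(printf '%s' "$info" | sed -n 's/.*"target": *"\([^"]*\)".*/\1/p')
echo "process=$ptype target=$target"
```

When no process is running, `.process` is absent and jq's `// "none"` alternative yields the string `none`, which is what the earlier `[[ none == \n\o\n\e ]]` comparisons in this log are matching against.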
00:15:20.573 [2024-11-26 12:58:38.069886] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:20.573 [2024-11-26 12:58:38.072549] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:15:20.573 [2024-11-26 12:58:38.074654] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:20.573 spare 00:15:20.573 12:58:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.573 12:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:21.512 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.512 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.512 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.512 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.513 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.513 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.513 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.513 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.513 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.513 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.513 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.513 "name": "raid_bdev1", 00:15:21.513 "uuid": "497891f3-b46f-4f8f-8565-7455554e4e83", 00:15:21.513 "strip_size_kb": 64, 00:15:21.513 "state": 
"online", 00:15:21.513 "raid_level": "raid5f", 00:15:21.513 "superblock": true, 00:15:21.513 "num_base_bdevs": 4, 00:15:21.513 "num_base_bdevs_discovered": 4, 00:15:21.513 "num_base_bdevs_operational": 4, 00:15:21.513 "process": { 00:15:21.513 "type": "rebuild", 00:15:21.513 "target": "spare", 00:15:21.513 "progress": { 00:15:21.513 "blocks": 19200, 00:15:21.513 "percent": 10 00:15:21.513 } 00:15:21.513 }, 00:15:21.513 "base_bdevs_list": [ 00:15:21.513 { 00:15:21.513 "name": "spare", 00:15:21.513 "uuid": "a8df069e-206f-5e6b-b29b-ef23cba81f7a", 00:15:21.513 "is_configured": true, 00:15:21.513 "data_offset": 2048, 00:15:21.513 "data_size": 63488 00:15:21.513 }, 00:15:21.513 { 00:15:21.513 "name": "BaseBdev2", 00:15:21.513 "uuid": "b60ed88f-3189-5efa-9808-454aa857669c", 00:15:21.513 "is_configured": true, 00:15:21.513 "data_offset": 2048, 00:15:21.513 "data_size": 63488 00:15:21.513 }, 00:15:21.513 { 00:15:21.513 "name": "BaseBdev3", 00:15:21.513 "uuid": "3a123e4f-9b83-5412-b742-fe8f01990ba5", 00:15:21.513 "is_configured": true, 00:15:21.513 "data_offset": 2048, 00:15:21.513 "data_size": 63488 00:15:21.513 }, 00:15:21.513 { 00:15:21.513 "name": "BaseBdev4", 00:15:21.513 "uuid": "74f485ec-f8f0-5267-8945-6420c7ab8355", 00:15:21.513 "is_configured": true, 00:15:21.513 "data_offset": 2048, 00:15:21.513 "data_size": 63488 00:15:21.513 } 00:15:21.513 ] 00:15:21.513 }' 00:15:21.513 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.513 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:21.513 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.771 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.771 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:21.771 12:58:39 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.771 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.771 [2024-11-26 12:58:39.241264] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:21.771 [2024-11-26 12:58:39.279809] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:21.771 [2024-11-26 12:58:39.279920] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.771 [2024-11-26 12:58:39.279938] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:21.771 [2024-11-26 12:58:39.279948] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:21.771 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.771 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:21.771 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.771 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.771 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.771 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.771 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:21.771 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.771 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.771 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.771 12:58:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.771 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.771 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.771 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.771 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.771 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.771 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.771 "name": "raid_bdev1", 00:15:21.771 "uuid": "497891f3-b46f-4f8f-8565-7455554e4e83", 00:15:21.771 "strip_size_kb": 64, 00:15:21.771 "state": "online", 00:15:21.771 "raid_level": "raid5f", 00:15:21.771 "superblock": true, 00:15:21.771 "num_base_bdevs": 4, 00:15:21.771 "num_base_bdevs_discovered": 3, 00:15:21.771 "num_base_bdevs_operational": 3, 00:15:21.771 "base_bdevs_list": [ 00:15:21.771 { 00:15:21.771 "name": null, 00:15:21.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.771 "is_configured": false, 00:15:21.771 "data_offset": 0, 00:15:21.771 "data_size": 63488 00:15:21.771 }, 00:15:21.771 { 00:15:21.771 "name": "BaseBdev2", 00:15:21.771 "uuid": "b60ed88f-3189-5efa-9808-454aa857669c", 00:15:21.771 "is_configured": true, 00:15:21.771 "data_offset": 2048, 00:15:21.771 "data_size": 63488 00:15:21.771 }, 00:15:21.771 { 00:15:21.771 "name": "BaseBdev3", 00:15:21.771 "uuid": "3a123e4f-9b83-5412-b742-fe8f01990ba5", 00:15:21.771 "is_configured": true, 00:15:21.771 "data_offset": 2048, 00:15:21.771 "data_size": 63488 00:15:21.771 }, 00:15:21.771 { 00:15:21.771 "name": "BaseBdev4", 00:15:21.771 "uuid": "74f485ec-f8f0-5267-8945-6420c7ab8355", 00:15:21.771 "is_configured": true, 00:15:21.771 "data_offset": 2048, 00:15:21.771 
"data_size": 63488 00:15:21.771 } 00:15:21.771 ] 00:15:21.771 }' 00:15:21.771 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.771 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.340 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:22.340 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.340 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:22.340 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:22.340 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.340 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.340 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.340 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.340 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.340 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.340 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.340 "name": "raid_bdev1", 00:15:22.340 "uuid": "497891f3-b46f-4f8f-8565-7455554e4e83", 00:15:22.340 "strip_size_kb": 64, 00:15:22.340 "state": "online", 00:15:22.340 "raid_level": "raid5f", 00:15:22.340 "superblock": true, 00:15:22.340 "num_base_bdevs": 4, 00:15:22.340 "num_base_bdevs_discovered": 3, 00:15:22.340 "num_base_bdevs_operational": 3, 00:15:22.340 "base_bdevs_list": [ 00:15:22.340 { 00:15:22.340 "name": null, 00:15:22.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.340 
"is_configured": false, 00:15:22.340 "data_offset": 0, 00:15:22.340 "data_size": 63488 00:15:22.340 }, 00:15:22.340 { 00:15:22.340 "name": "BaseBdev2", 00:15:22.340 "uuid": "b60ed88f-3189-5efa-9808-454aa857669c", 00:15:22.341 "is_configured": true, 00:15:22.341 "data_offset": 2048, 00:15:22.341 "data_size": 63488 00:15:22.341 }, 00:15:22.341 { 00:15:22.341 "name": "BaseBdev3", 00:15:22.341 "uuid": "3a123e4f-9b83-5412-b742-fe8f01990ba5", 00:15:22.341 "is_configured": true, 00:15:22.341 "data_offset": 2048, 00:15:22.341 "data_size": 63488 00:15:22.341 }, 00:15:22.341 { 00:15:22.341 "name": "BaseBdev4", 00:15:22.341 "uuid": "74f485ec-f8f0-5267-8945-6420c7ab8355", 00:15:22.341 "is_configured": true, 00:15:22.341 "data_offset": 2048, 00:15:22.341 "data_size": 63488 00:15:22.341 } 00:15:22.341 ] 00:15:22.341 }' 00:15:22.341 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.341 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:22.341 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.341 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:22.341 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:22.341 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.341 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.341 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.341 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:22.341 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.341 12:58:39 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.341 [2024-11-26 12:58:39.899630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:22.341 [2024-11-26 12:58:39.899730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.341 [2024-11-26 12:58:39.899769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:15:22.341 [2024-11-26 12:58:39.899781] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.341 [2024-11-26 12:58:39.900209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.341 [2024-11-26 12:58:39.900230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:22.341 [2024-11-26 12:58:39.900296] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:22.341 [2024-11-26 12:58:39.900313] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:22.341 [2024-11-26 12:58:39.900321] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:22.341 [2024-11-26 12:58:39.900332] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:22.341 BaseBdev1 00:15:22.341 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.341 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:23.280 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:23.280 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.280 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:23.280 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.280 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.280 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.280 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.280 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.280 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.280 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.280 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.280 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.280 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.280 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.280 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.539 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.539 "name": "raid_bdev1", 00:15:23.539 "uuid": "497891f3-b46f-4f8f-8565-7455554e4e83", 00:15:23.539 "strip_size_kb": 64, 00:15:23.539 "state": "online", 00:15:23.539 "raid_level": "raid5f", 00:15:23.539 "superblock": true, 00:15:23.539 "num_base_bdevs": 4, 00:15:23.539 "num_base_bdevs_discovered": 3, 00:15:23.539 "num_base_bdevs_operational": 3, 00:15:23.539 "base_bdevs_list": [ 00:15:23.539 { 00:15:23.539 "name": null, 00:15:23.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.539 "is_configured": false, 00:15:23.539 
"data_offset": 0, 00:15:23.539 "data_size": 63488 00:15:23.539 }, 00:15:23.539 { 00:15:23.539 "name": "BaseBdev2", 00:15:23.539 "uuid": "b60ed88f-3189-5efa-9808-454aa857669c", 00:15:23.539 "is_configured": true, 00:15:23.539 "data_offset": 2048, 00:15:23.539 "data_size": 63488 00:15:23.539 }, 00:15:23.539 { 00:15:23.539 "name": "BaseBdev3", 00:15:23.539 "uuid": "3a123e4f-9b83-5412-b742-fe8f01990ba5", 00:15:23.539 "is_configured": true, 00:15:23.539 "data_offset": 2048, 00:15:23.539 "data_size": 63488 00:15:23.539 }, 00:15:23.539 { 00:15:23.539 "name": "BaseBdev4", 00:15:23.539 "uuid": "74f485ec-f8f0-5267-8945-6420c7ab8355", 00:15:23.539 "is_configured": true, 00:15:23.539 "data_offset": 2048, 00:15:23.539 "data_size": 63488 00:15:23.539 } 00:15:23.539 ] 00:15:23.539 }' 00:15:23.539 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.539 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.799 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:23.799 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.799 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:23.799 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:23.799 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.799 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.799 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.799 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.799 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:23.799 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.799 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.799 "name": "raid_bdev1", 00:15:23.799 "uuid": "497891f3-b46f-4f8f-8565-7455554e4e83", 00:15:23.799 "strip_size_kb": 64, 00:15:23.799 "state": "online", 00:15:23.799 "raid_level": "raid5f", 00:15:23.799 "superblock": true, 00:15:23.799 "num_base_bdevs": 4, 00:15:23.799 "num_base_bdevs_discovered": 3, 00:15:23.799 "num_base_bdevs_operational": 3, 00:15:23.799 "base_bdevs_list": [ 00:15:23.799 { 00:15:23.799 "name": null, 00:15:23.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.799 "is_configured": false, 00:15:23.799 "data_offset": 0, 00:15:23.799 "data_size": 63488 00:15:23.799 }, 00:15:23.799 { 00:15:23.799 "name": "BaseBdev2", 00:15:23.799 "uuid": "b60ed88f-3189-5efa-9808-454aa857669c", 00:15:23.799 "is_configured": true, 00:15:23.799 "data_offset": 2048, 00:15:23.799 "data_size": 63488 00:15:23.799 }, 00:15:23.799 { 00:15:23.799 "name": "BaseBdev3", 00:15:23.799 "uuid": "3a123e4f-9b83-5412-b742-fe8f01990ba5", 00:15:23.799 "is_configured": true, 00:15:23.799 "data_offset": 2048, 00:15:23.799 "data_size": 63488 00:15:23.799 }, 00:15:23.799 { 00:15:23.799 "name": "BaseBdev4", 00:15:23.799 "uuid": "74f485ec-f8f0-5267-8945-6420c7ab8355", 00:15:23.799 "is_configured": true, 00:15:23.799 "data_offset": 2048, 00:15:23.800 "data_size": 63488 00:15:23.800 } 00:15:23.800 ] 00:15:23.800 }' 00:15:23.800 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.059 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:24.059 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.059 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:24.059 
12:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:24.059 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:15:24.059 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:24.059 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:24.059 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:24.059 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:24.059 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:24.059 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:24.059 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.059 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.059 [2024-11-26 12:58:41.540799] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.059 [2024-11-26 12:58:41.540959] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:24.060 [2024-11-26 12:58:41.541016] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:24.060 request: 00:15:24.060 { 00:15:24.060 "base_bdev": "BaseBdev1", 00:15:24.060 "raid_bdev": "raid_bdev1", 00:15:24.060 "method": "bdev_raid_add_base_bdev", 00:15:24.060 "req_id": 1 00:15:24.060 } 00:15:24.060 Got JSON-RPC error response 00:15:24.060 response: 00:15:24.060 { 00:15:24.060 "code": -22, 00:15:24.060 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:15:24.060 } 00:15:24.060 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:24.060 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:15:24.060 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:24.060 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:24.060 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:24.060 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:24.999 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:24.999 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.999 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.999 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.999 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.999 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.999 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.999 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.999 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.000 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.000 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.000 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.000 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.000 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.000 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.000 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.000 "name": "raid_bdev1", 00:15:25.000 "uuid": "497891f3-b46f-4f8f-8565-7455554e4e83", 00:15:25.000 "strip_size_kb": 64, 00:15:25.000 "state": "online", 00:15:25.000 "raid_level": "raid5f", 00:15:25.000 "superblock": true, 00:15:25.000 "num_base_bdevs": 4, 00:15:25.000 "num_base_bdevs_discovered": 3, 00:15:25.000 "num_base_bdevs_operational": 3, 00:15:25.000 "base_bdevs_list": [ 00:15:25.000 { 00:15:25.000 "name": null, 00:15:25.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.000 "is_configured": false, 00:15:25.000 "data_offset": 0, 00:15:25.000 "data_size": 63488 00:15:25.000 }, 00:15:25.000 { 00:15:25.000 "name": "BaseBdev2", 00:15:25.000 "uuid": "b60ed88f-3189-5efa-9808-454aa857669c", 00:15:25.000 "is_configured": true, 00:15:25.000 "data_offset": 2048, 00:15:25.000 "data_size": 63488 00:15:25.000 }, 00:15:25.000 { 00:15:25.000 "name": "BaseBdev3", 00:15:25.000 "uuid": "3a123e4f-9b83-5412-b742-fe8f01990ba5", 00:15:25.000 "is_configured": true, 00:15:25.000 "data_offset": 2048, 00:15:25.000 "data_size": 63488 00:15:25.000 }, 00:15:25.000 { 00:15:25.000 "name": "BaseBdev4", 00:15:25.000 "uuid": "74f485ec-f8f0-5267-8945-6420c7ab8355", 00:15:25.000 "is_configured": true, 00:15:25.000 "data_offset": 2048, 00:15:25.000 "data_size": 63488 00:15:25.000 } 00:15:25.000 ] 00:15:25.000 }' 00:15:25.000 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.000 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:15:25.570 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:25.570 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.570 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:25.570 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:25.570 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.570 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.570 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.570 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.570 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.570 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.570 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.570 "name": "raid_bdev1", 00:15:25.570 "uuid": "497891f3-b46f-4f8f-8565-7455554e4e83", 00:15:25.570 "strip_size_kb": 64, 00:15:25.570 "state": "online", 00:15:25.570 "raid_level": "raid5f", 00:15:25.570 "superblock": true, 00:15:25.570 "num_base_bdevs": 4, 00:15:25.570 "num_base_bdevs_discovered": 3, 00:15:25.570 "num_base_bdevs_operational": 3, 00:15:25.570 "base_bdevs_list": [ 00:15:25.570 { 00:15:25.570 "name": null, 00:15:25.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.570 "is_configured": false, 00:15:25.570 "data_offset": 0, 00:15:25.570 "data_size": 63488 00:15:25.570 }, 00:15:25.570 { 00:15:25.570 "name": "BaseBdev2", 00:15:25.570 "uuid": "b60ed88f-3189-5efa-9808-454aa857669c", 00:15:25.570 "is_configured": true, 
00:15:25.570 "data_offset": 2048, 00:15:25.570 "data_size": 63488 00:15:25.570 }, 00:15:25.570 { 00:15:25.570 "name": "BaseBdev3", 00:15:25.570 "uuid": "3a123e4f-9b83-5412-b742-fe8f01990ba5", 00:15:25.570 "is_configured": true, 00:15:25.570 "data_offset": 2048, 00:15:25.570 "data_size": 63488 00:15:25.570 }, 00:15:25.570 { 00:15:25.570 "name": "BaseBdev4", 00:15:25.570 "uuid": "74f485ec-f8f0-5267-8945-6420c7ab8355", 00:15:25.570 "is_configured": true, 00:15:25.570 "data_offset": 2048, 00:15:25.570 "data_size": 63488 00:15:25.570 } 00:15:25.570 ] 00:15:25.570 }' 00:15:25.570 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.570 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:25.570 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.570 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:25.570 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 95738 00:15:25.570 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 95738 ']' 00:15:25.570 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 95738 00:15:25.570 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:25.570 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:25.570 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95738 00:15:25.570 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:25.570 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:25.570 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 
-- # echo 'killing process with pid 95738' 00:15:25.570 killing process with pid 95738 00:15:25.570 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 95738 00:15:25.570 Received shutdown signal, test time was about 60.000000 seconds 00:15:25.570 00:15:25.570 Latency(us) 00:15:25.570 [2024-11-26T12:58:43.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.570 [2024-11-26T12:58:43.254Z] =================================================================================================================== 00:15:25.570 [2024-11-26T12:58:43.254Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:25.570 [2024-11-26 12:58:43.229107] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:25.570 [2024-11-26 12:58:43.229230] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:25.570 [2024-11-26 12:58:43.229298] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:25.570 [2024-11-26 12:58:43.229307] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:15:25.570 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 95738 00:15:25.830 [2024-11-26 12:58:43.280493] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:26.090 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:26.090 00:15:26.090 real 0m25.315s 00:15:26.090 user 0m32.213s 00:15:26.090 sys 0m3.125s 00:15:26.090 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:26.090 ************************************ 00:15:26.090 END TEST raid5f_rebuild_test_sb 00:15:26.091 ************************************ 00:15:26.091 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.091 12:58:43 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:15:26.091 12:58:43 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:15:26.091 12:58:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:26.091 12:58:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:26.091 12:58:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:26.091 ************************************ 00:15:26.091 START TEST raid_state_function_test_sb_4k 00:15:26.091 ************************************ 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:26.091 12:58:43 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:26.091 Process raid pid: 96537 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=96537 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 96537' 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 96537 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 96537 ']' 00:15:26.091 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:26.091 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:26.091 [2024-11-26 12:58:43.694657] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:26.091 [2024-11-26 12:58:43.694803] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.351 [2024-11-26 12:58:43.863707] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.351 [2024-11-26 12:58:43.912921] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.351 [2024-11-26 12:58:43.956294] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:26.351 [2024-11-26 12:58:43.956334] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:26.921 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:26.921 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:15:26.921 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:26.921 12:58:44 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.921 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:26.921 [2024-11-26 12:58:44.537728] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:26.921 [2024-11-26 12:58:44.537776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:26.921 [2024-11-26 12:58:44.537787] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:26.921 [2024-11-26 12:58:44.537796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:26.921 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.921 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:26.921 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.921 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.921 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.921 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.921 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:26.921 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.921 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.921 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.921 12:58:44 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.921 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.921 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.921 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.921 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:26.921 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.921 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.921 "name": "Existed_Raid", 00:15:26.921 "uuid": "c78f422e-a9ad-4327-9608-9b438ee78f89", 00:15:26.921 "strip_size_kb": 0, 00:15:26.921 "state": "configuring", 00:15:26.921 "raid_level": "raid1", 00:15:26.921 "superblock": true, 00:15:26.921 "num_base_bdevs": 2, 00:15:26.921 "num_base_bdevs_discovered": 0, 00:15:26.921 "num_base_bdevs_operational": 2, 00:15:26.921 "base_bdevs_list": [ 00:15:26.921 { 00:15:26.921 "name": "BaseBdev1", 00:15:26.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.921 "is_configured": false, 00:15:26.921 "data_offset": 0, 00:15:26.921 "data_size": 0 00:15:26.921 }, 00:15:26.921 { 00:15:26.921 "name": "BaseBdev2", 00:15:26.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.921 "is_configured": false, 00:15:26.921 "data_offset": 0, 00:15:26.921 "data_size": 0 00:15:26.921 } 00:15:26.921 ] 00:15:26.921 }' 00:15:26.921 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.181 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.442 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:15:27.442 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.442 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.442 [2024-11-26 12:58:45.004846] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:27.442 [2024-11-26 12:58:45.004931] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.442 [2024-11-26 12:58:45.016856] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:27.442 [2024-11-26 12:58:45.016940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:27.442 [2024-11-26 12:58:45.016965] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:27.442 [2024-11-26 12:58:45.016986] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:15:27.442 [2024-11-26 12:58:45.037980] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:27.442 BaseBdev1 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.442 [ 00:15:27.442 { 00:15:27.442 "name": "BaseBdev1", 00:15:27.442 "aliases": [ 00:15:27.442 "84b7b933-5809-41f9-b21a-33cccea3926e" 00:15:27.442 
], 00:15:27.442 "product_name": "Malloc disk", 00:15:27.442 "block_size": 4096, 00:15:27.442 "num_blocks": 8192, 00:15:27.442 "uuid": "84b7b933-5809-41f9-b21a-33cccea3926e", 00:15:27.442 "assigned_rate_limits": { 00:15:27.442 "rw_ios_per_sec": 0, 00:15:27.442 "rw_mbytes_per_sec": 0, 00:15:27.442 "r_mbytes_per_sec": 0, 00:15:27.442 "w_mbytes_per_sec": 0 00:15:27.442 }, 00:15:27.442 "claimed": true, 00:15:27.442 "claim_type": "exclusive_write", 00:15:27.442 "zoned": false, 00:15:27.442 "supported_io_types": { 00:15:27.442 "read": true, 00:15:27.442 "write": true, 00:15:27.442 "unmap": true, 00:15:27.442 "flush": true, 00:15:27.442 "reset": true, 00:15:27.442 "nvme_admin": false, 00:15:27.442 "nvme_io": false, 00:15:27.442 "nvme_io_md": false, 00:15:27.442 "write_zeroes": true, 00:15:27.442 "zcopy": true, 00:15:27.442 "get_zone_info": false, 00:15:27.442 "zone_management": false, 00:15:27.442 "zone_append": false, 00:15:27.442 "compare": false, 00:15:27.442 "compare_and_write": false, 00:15:27.442 "abort": true, 00:15:27.442 "seek_hole": false, 00:15:27.442 "seek_data": false, 00:15:27.442 "copy": true, 00:15:27.442 "nvme_iov_md": false 00:15:27.442 }, 00:15:27.442 "memory_domains": [ 00:15:27.442 { 00:15:27.442 "dma_device_id": "system", 00:15:27.442 "dma_device_type": 1 00:15:27.442 }, 00:15:27.442 { 00:15:27.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.442 "dma_device_type": 2 00:15:27.442 } 00:15:27.442 ], 00:15:27.442 "driver_specific": {} 00:15:27.442 } 00:15:27.442 ] 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.442 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.702 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.702 "name": "Existed_Raid", 00:15:27.702 "uuid": "df5ae221-df04-4af9-8b51-259a160045cd", 00:15:27.702 "strip_size_kb": 0, 00:15:27.702 "state": "configuring", 00:15:27.702 "raid_level": "raid1", 00:15:27.702 "superblock": true, 00:15:27.702 "num_base_bdevs": 2, 00:15:27.702 "num_base_bdevs_discovered": 1, 
00:15:27.702 "num_base_bdevs_operational": 2, 00:15:27.702 "base_bdevs_list": [ 00:15:27.702 { 00:15:27.702 "name": "BaseBdev1", 00:15:27.702 "uuid": "84b7b933-5809-41f9-b21a-33cccea3926e", 00:15:27.702 "is_configured": true, 00:15:27.702 "data_offset": 256, 00:15:27.702 "data_size": 7936 00:15:27.702 }, 00:15:27.702 { 00:15:27.702 "name": "BaseBdev2", 00:15:27.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.702 "is_configured": false, 00:15:27.702 "data_offset": 0, 00:15:27.702 "data_size": 0 00:15:27.702 } 00:15:27.702 ] 00:15:27.702 }' 00:15:27.702 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.702 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.962 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:27.962 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.962 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.962 [2024-11-26 12:58:45.553125] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:27.962 [2024-11-26 12:58:45.553162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:15:27.962 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.962 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:27.962 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.962 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.962 [2024-11-26 12:58:45.565140] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:27.962 [2024-11-26 12:58:45.566999] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:27.962 [2024-11-26 12:58:45.567042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:27.962 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.962 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:27.962 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:27.962 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:27.962 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.962 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.962 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.962 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:27.962 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:27.962 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.962 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.962 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.962 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.962 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:27.962 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.962 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.962 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.962 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.962 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.962 "name": "Existed_Raid", 00:15:27.962 "uuid": "642ae7c3-0ad1-4998-b489-6746b633063a", 00:15:27.962 "strip_size_kb": 0, 00:15:27.962 "state": "configuring", 00:15:27.962 "raid_level": "raid1", 00:15:27.962 "superblock": true, 00:15:27.962 "num_base_bdevs": 2, 00:15:27.962 "num_base_bdevs_discovered": 1, 00:15:27.962 "num_base_bdevs_operational": 2, 00:15:27.962 "base_bdevs_list": [ 00:15:27.962 { 00:15:27.962 "name": "BaseBdev1", 00:15:27.962 "uuid": "84b7b933-5809-41f9-b21a-33cccea3926e", 00:15:27.962 "is_configured": true, 00:15:27.962 "data_offset": 256, 00:15:27.962 "data_size": 7936 00:15:27.962 }, 00:15:27.962 { 00:15:27.962 "name": "BaseBdev2", 00:15:27.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.962 "is_configured": false, 00:15:27.962 "data_offset": 0, 00:15:27.962 "data_size": 0 00:15:27.962 } 00:15:27.962 ] 00:15:27.962 }' 00:15:27.962 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.962 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.533 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:15:28.533 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.533 12:58:46 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.533 [2024-11-26 12:58:46.023796] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:28.533 [2024-11-26 12:58:46.024073] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:28.533 [2024-11-26 12:58:46.024117] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:28.533 [2024-11-26 12:58:46.024520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:28.533 BaseBdev2 00:15:28.533 [2024-11-26 12:58:46.024713] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:28.533 [2024-11-26 12:58:46.024735] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:15:28.533 [2024-11-26 12:58:46.024877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.533 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.533 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:28.533 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:28.533 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:28.533 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:15:28.533 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:28.533 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:28.533 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:28.533 12:58:46 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.533 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.533 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.533 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:28.533 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.533 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.533 [ 00:15:28.533 { 00:15:28.533 "name": "BaseBdev2", 00:15:28.533 "aliases": [ 00:15:28.533 "c4a22848-805b-420b-b0ff-b540384df201" 00:15:28.533 ], 00:15:28.533 "product_name": "Malloc disk", 00:15:28.533 "block_size": 4096, 00:15:28.534 "num_blocks": 8192, 00:15:28.534 "uuid": "c4a22848-805b-420b-b0ff-b540384df201", 00:15:28.534 "assigned_rate_limits": { 00:15:28.534 "rw_ios_per_sec": 0, 00:15:28.534 "rw_mbytes_per_sec": 0, 00:15:28.534 "r_mbytes_per_sec": 0, 00:15:28.534 "w_mbytes_per_sec": 0 00:15:28.534 }, 00:15:28.534 "claimed": true, 00:15:28.534 "claim_type": "exclusive_write", 00:15:28.534 "zoned": false, 00:15:28.534 "supported_io_types": { 00:15:28.534 "read": true, 00:15:28.534 "write": true, 00:15:28.534 "unmap": true, 00:15:28.534 "flush": true, 00:15:28.534 "reset": true, 00:15:28.534 "nvme_admin": false, 00:15:28.534 "nvme_io": false, 00:15:28.534 "nvme_io_md": false, 00:15:28.534 "write_zeroes": true, 00:15:28.534 "zcopy": true, 00:15:28.534 "get_zone_info": false, 00:15:28.534 "zone_management": false, 00:15:28.534 "zone_append": false, 00:15:28.534 "compare": false, 00:15:28.534 "compare_and_write": false, 00:15:28.534 "abort": true, 00:15:28.534 "seek_hole": false, 00:15:28.534 "seek_data": false, 00:15:28.534 "copy": true, 00:15:28.534 "nvme_iov_md": false 
00:15:28.534 }, 00:15:28.534 "memory_domains": [ 00:15:28.534 { 00:15:28.534 "dma_device_id": "system", 00:15:28.534 "dma_device_type": 1 00:15:28.534 }, 00:15:28.534 { 00:15:28.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.534 "dma_device_type": 2 00:15:28.534 } 00:15:28.534 ], 00:15:28.534 "driver_specific": {} 00:15:28.534 } 00:15:28.534 ] 00:15:28.534 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.534 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:15:28.534 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:28.534 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:28.534 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:28.534 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.534 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.534 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.534 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.534 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:28.534 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.534 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.534 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.534 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:15:28.534 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.534 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.534 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.534 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.534 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.534 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.534 "name": "Existed_Raid", 00:15:28.534 "uuid": "642ae7c3-0ad1-4998-b489-6746b633063a", 00:15:28.534 "strip_size_kb": 0, 00:15:28.534 "state": "online", 00:15:28.534 "raid_level": "raid1", 00:15:28.534 "superblock": true, 00:15:28.534 "num_base_bdevs": 2, 00:15:28.534 "num_base_bdevs_discovered": 2, 00:15:28.534 "num_base_bdevs_operational": 2, 00:15:28.534 "base_bdevs_list": [ 00:15:28.534 { 00:15:28.534 "name": "BaseBdev1", 00:15:28.534 "uuid": "84b7b933-5809-41f9-b21a-33cccea3926e", 00:15:28.534 "is_configured": true, 00:15:28.534 "data_offset": 256, 00:15:28.534 "data_size": 7936 00:15:28.534 }, 00:15:28.534 { 00:15:28.534 "name": "BaseBdev2", 00:15:28.534 "uuid": "c4a22848-805b-420b-b0ff-b540384df201", 00:15:28.534 "is_configured": true, 00:15:28.534 "data_offset": 256, 00:15:28.534 "data_size": 7936 00:15:28.534 } 00:15:28.534 ] 00:15:28.534 }' 00:15:28.534 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.534 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:29.104 12:58:46 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.104 [2024-11-26 12:58:46.551336] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:29.104 "name": "Existed_Raid", 00:15:29.104 "aliases": [ 00:15:29.104 "642ae7c3-0ad1-4998-b489-6746b633063a" 00:15:29.104 ], 00:15:29.104 "product_name": "Raid Volume", 00:15:29.104 "block_size": 4096, 00:15:29.104 "num_blocks": 7936, 00:15:29.104 "uuid": "642ae7c3-0ad1-4998-b489-6746b633063a", 00:15:29.104 "assigned_rate_limits": { 00:15:29.104 "rw_ios_per_sec": 0, 00:15:29.104 "rw_mbytes_per_sec": 0, 00:15:29.104 "r_mbytes_per_sec": 0, 00:15:29.104 "w_mbytes_per_sec": 0 00:15:29.104 }, 00:15:29.104 "claimed": false, 00:15:29.104 "zoned": false, 00:15:29.104 "supported_io_types": { 00:15:29.104 "read": true, 
00:15:29.104 "write": true, 00:15:29.104 "unmap": false, 00:15:29.104 "flush": false, 00:15:29.104 "reset": true, 00:15:29.104 "nvme_admin": false, 00:15:29.104 "nvme_io": false, 00:15:29.104 "nvme_io_md": false, 00:15:29.104 "write_zeroes": true, 00:15:29.104 "zcopy": false, 00:15:29.104 "get_zone_info": false, 00:15:29.104 "zone_management": false, 00:15:29.104 "zone_append": false, 00:15:29.104 "compare": false, 00:15:29.104 "compare_and_write": false, 00:15:29.104 "abort": false, 00:15:29.104 "seek_hole": false, 00:15:29.104 "seek_data": false, 00:15:29.104 "copy": false, 00:15:29.104 "nvme_iov_md": false 00:15:29.104 }, 00:15:29.104 "memory_domains": [ 00:15:29.104 { 00:15:29.104 "dma_device_id": "system", 00:15:29.104 "dma_device_type": 1 00:15:29.104 }, 00:15:29.104 { 00:15:29.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.104 "dma_device_type": 2 00:15:29.104 }, 00:15:29.104 { 00:15:29.104 "dma_device_id": "system", 00:15:29.104 "dma_device_type": 1 00:15:29.104 }, 00:15:29.104 { 00:15:29.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.104 "dma_device_type": 2 00:15:29.104 } 00:15:29.104 ], 00:15:29.104 "driver_specific": { 00:15:29.104 "raid": { 00:15:29.104 "uuid": "642ae7c3-0ad1-4998-b489-6746b633063a", 00:15:29.104 "strip_size_kb": 0, 00:15:29.104 "state": "online", 00:15:29.104 "raid_level": "raid1", 00:15:29.104 "superblock": true, 00:15:29.104 "num_base_bdevs": 2, 00:15:29.104 "num_base_bdevs_discovered": 2, 00:15:29.104 "num_base_bdevs_operational": 2, 00:15:29.104 "base_bdevs_list": [ 00:15:29.104 { 00:15:29.104 "name": "BaseBdev1", 00:15:29.104 "uuid": "84b7b933-5809-41f9-b21a-33cccea3926e", 00:15:29.104 "is_configured": true, 00:15:29.104 "data_offset": 256, 00:15:29.104 "data_size": 7936 00:15:29.104 }, 00:15:29.104 { 00:15:29.104 "name": "BaseBdev2", 00:15:29.104 "uuid": "c4a22848-805b-420b-b0ff-b540384df201", 00:15:29.104 "is_configured": true, 00:15:29.104 "data_offset": 256, 00:15:29.104 "data_size": 7936 00:15:29.104 } 
00:15:29.104 ] 00:15:29.104 } 00:15:29.104 } 00:15:29.104 }' 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:29.104 BaseBdev2' 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:29.104 12:58:46 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.104 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.104 [2024-11-26 12:58:46.778725] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:29.387 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.387 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:29.387 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:29.387 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:29.387 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:29.387 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:29.387 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:29.388 12:58:46 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.388 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.388 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.388 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.388 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:29.388 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.388 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.388 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.388 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.388 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.388 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.388 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.388 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.388 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.388 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.388 "name": "Existed_Raid", 00:15:29.388 "uuid": "642ae7c3-0ad1-4998-b489-6746b633063a", 00:15:29.388 "strip_size_kb": 0, 00:15:29.388 "state": "online", 00:15:29.388 "raid_level": "raid1", 00:15:29.388 "superblock": true, 00:15:29.388 
"num_base_bdevs": 2, 00:15:29.388 "num_base_bdevs_discovered": 1, 00:15:29.388 "num_base_bdevs_operational": 1, 00:15:29.388 "base_bdevs_list": [ 00:15:29.388 { 00:15:29.388 "name": null, 00:15:29.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.388 "is_configured": false, 00:15:29.388 "data_offset": 0, 00:15:29.388 "data_size": 7936 00:15:29.388 }, 00:15:29.388 { 00:15:29.388 "name": "BaseBdev2", 00:15:29.388 "uuid": "c4a22848-805b-420b-b0ff-b540384df201", 00:15:29.388 "is_configured": true, 00:15:29.388 "data_offset": 256, 00:15:29.388 "data_size": 7936 00:15:29.388 } 00:15:29.388 ] 00:15:29.388 }' 00:15:29.388 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.388 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.699 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:29.699 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:29.699 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:29.699 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.699 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.699 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.699 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.699 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:29.699 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:29.699 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:15:29.699 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.699 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.699 [2024-11-26 12:58:47.265468] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:29.699 [2024-11-26 12:58:47.265556] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:29.699 [2024-11-26 12:58:47.277104] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.699 [2024-11-26 12:58:47.277249] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:29.699 [2024-11-26 12:58:47.277266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:15:29.699 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.699 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:29.699 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:29.699 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.699 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:29.699 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.699 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.699 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.699 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:29.699 12:58:47 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:29.699 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:29.699 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 96537 00:15:29.699 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 96537 ']' 00:15:29.699 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 96537 00:15:29.699 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:15:29.699 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:29.699 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96537 00:15:29.969 killing process with pid 96537 00:15:29.969 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:29.969 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:29.969 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96537' 00:15:29.969 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 96537 00:15:29.969 [2024-11-26 12:58:47.372909] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:29.970 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 96537 00:15:29.970 [2024-11-26 12:58:47.373897] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:29.970 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:15:29.970 00:15:29.970 real 0m4.030s 00:15:29.970 user 0m6.254s 00:15:29.970 sys 0m0.915s 00:15:29.970 12:58:47 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:29.970 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.970 ************************************ 00:15:29.970 END TEST raid_state_function_test_sb_4k 00:15:29.970 ************************************ 00:15:30.230 12:58:47 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:15:30.230 12:58:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:30.230 12:58:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:30.230 12:58:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:30.230 ************************************ 00:15:30.230 START TEST raid_superblock_test_4k 00:15:30.230 ************************************ 00:15:30.230 12:58:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:15:30.230 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:30.230 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:30.230 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:30.230 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:30.230 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:30.230 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:30.230 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:30.230 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:30.230 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:30.230 
12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:30.230 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:30.230 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:30.230 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:30.230 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:30.230 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:30.230 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=96773 00:15:30.230 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:30.230 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 96773 00:15:30.230 12:58:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 96773 ']' 00:15:30.230 12:58:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.230 12:58:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:30.230 12:58:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.230 12:58:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:30.230 12:58:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.230 [2024-11-26 12:58:47.799442] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:30.230 [2024-11-26 12:58:47.799588] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96773 ] 00:15:30.490 [2024-11-26 12:58:47.962031] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.490 [2024-11-26 12:58:48.008197] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.490 [2024-11-26 12:58:48.050337] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:30.490 [2024-11-26 12:58:48.050371] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.060 malloc1 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.060 [2024-11-26 12:58:48.645222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:31.060 [2024-11-26 12:58:48.645289] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.060 [2024-11-26 12:58:48.645306] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:31.060 [2024-11-26 12:58:48.645325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.060 [2024-11-26 12:58:48.647397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.060 [2024-11-26 12:58:48.647440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:31.060 pt1 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.060 malloc2 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.060 [2024-11-26 12:58:48.689143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:31.060 [2024-11-26 12:58:48.689429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.060 [2024-11-26 12:58:48.689554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:31.060 [2024-11-26 12:58:48.689655] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.060 [2024-11-26 12:58:48.694210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.060 [2024-11-26 
12:58:48.694342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:31.060 pt2 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.060 [2024-11-26 12:58:48.702638] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:31.060 [2024-11-26 12:58:48.705246] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:31.060 [2024-11-26 12:58:48.705486] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:31.060 [2024-11-26 12:58:48.705551] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:31.060 [2024-11-26 12:58:48.705918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:31.060 [2024-11-26 12:58:48.706151] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:31.060 [2024-11-26 12:58:48.706235] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:31.060 [2024-11-26 12:58:48.706506] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:31.060 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.061 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.061 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.061 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.061 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.061 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.061 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.061 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.061 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.320 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.320 "name": "raid_bdev1", 00:15:31.320 "uuid": "90156e68-6364-4f59-a9e5-47d922cf3b3f", 00:15:31.320 "strip_size_kb": 0, 00:15:31.320 "state": "online", 00:15:31.320 "raid_level": "raid1", 00:15:31.320 "superblock": true, 00:15:31.320 "num_base_bdevs": 2, 00:15:31.320 
"num_base_bdevs_discovered": 2, 00:15:31.320 "num_base_bdevs_operational": 2, 00:15:31.320 "base_bdevs_list": [ 00:15:31.320 { 00:15:31.320 "name": "pt1", 00:15:31.320 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:31.320 "is_configured": true, 00:15:31.320 "data_offset": 256, 00:15:31.320 "data_size": 7936 00:15:31.320 }, 00:15:31.320 { 00:15:31.320 "name": "pt2", 00:15:31.320 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:31.320 "is_configured": true, 00:15:31.320 "data_offset": 256, 00:15:31.320 "data_size": 7936 00:15:31.320 } 00:15:31.320 ] 00:15:31.320 }' 00:15:31.320 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.320 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.579 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:31.579 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:31.579 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:31.579 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:31.579 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:31.579 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:31.579 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:31.579 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.579 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.579 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:31.579 [2024-11-26 12:58:49.158009] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:15:31.579 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.579 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:31.579 "name": "raid_bdev1", 00:15:31.579 "aliases": [ 00:15:31.579 "90156e68-6364-4f59-a9e5-47d922cf3b3f" 00:15:31.579 ], 00:15:31.579 "product_name": "Raid Volume", 00:15:31.579 "block_size": 4096, 00:15:31.579 "num_blocks": 7936, 00:15:31.579 "uuid": "90156e68-6364-4f59-a9e5-47d922cf3b3f", 00:15:31.579 "assigned_rate_limits": { 00:15:31.579 "rw_ios_per_sec": 0, 00:15:31.579 "rw_mbytes_per_sec": 0, 00:15:31.579 "r_mbytes_per_sec": 0, 00:15:31.579 "w_mbytes_per_sec": 0 00:15:31.579 }, 00:15:31.579 "claimed": false, 00:15:31.579 "zoned": false, 00:15:31.579 "supported_io_types": { 00:15:31.579 "read": true, 00:15:31.579 "write": true, 00:15:31.579 "unmap": false, 00:15:31.579 "flush": false, 00:15:31.579 "reset": true, 00:15:31.579 "nvme_admin": false, 00:15:31.579 "nvme_io": false, 00:15:31.579 "nvme_io_md": false, 00:15:31.579 "write_zeroes": true, 00:15:31.579 "zcopy": false, 00:15:31.579 "get_zone_info": false, 00:15:31.579 "zone_management": false, 00:15:31.579 "zone_append": false, 00:15:31.579 "compare": false, 00:15:31.579 "compare_and_write": false, 00:15:31.579 "abort": false, 00:15:31.579 "seek_hole": false, 00:15:31.579 "seek_data": false, 00:15:31.579 "copy": false, 00:15:31.579 "nvme_iov_md": false 00:15:31.579 }, 00:15:31.579 "memory_domains": [ 00:15:31.579 { 00:15:31.579 "dma_device_id": "system", 00:15:31.579 "dma_device_type": 1 00:15:31.579 }, 00:15:31.579 { 00:15:31.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.579 "dma_device_type": 2 00:15:31.579 }, 00:15:31.579 { 00:15:31.579 "dma_device_id": "system", 00:15:31.579 "dma_device_type": 1 00:15:31.579 }, 00:15:31.579 { 00:15:31.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.579 "dma_device_type": 2 00:15:31.579 } 00:15:31.579 ], 
00:15:31.579 "driver_specific": { 00:15:31.579 "raid": { 00:15:31.579 "uuid": "90156e68-6364-4f59-a9e5-47d922cf3b3f", 00:15:31.579 "strip_size_kb": 0, 00:15:31.579 "state": "online", 00:15:31.579 "raid_level": "raid1", 00:15:31.579 "superblock": true, 00:15:31.579 "num_base_bdevs": 2, 00:15:31.579 "num_base_bdevs_discovered": 2, 00:15:31.579 "num_base_bdevs_operational": 2, 00:15:31.579 "base_bdevs_list": [ 00:15:31.579 { 00:15:31.579 "name": "pt1", 00:15:31.579 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:31.579 "is_configured": true, 00:15:31.579 "data_offset": 256, 00:15:31.579 "data_size": 7936 00:15:31.579 }, 00:15:31.579 { 00:15:31.579 "name": "pt2", 00:15:31.579 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:31.579 "is_configured": true, 00:15:31.579 "data_offset": 256, 00:15:31.579 "data_size": 7936 00:15:31.579 } 00:15:31.579 ] 00:15:31.579 } 00:15:31.579 } 00:15:31.579 }' 00:15:31.579 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:31.579 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:31.579 pt2' 00:15:31.579 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.839 12:58:49 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.839 [2024-11-26 12:58:49.393519] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=90156e68-6364-4f59-a9e5-47d922cf3b3f 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 90156e68-6364-4f59-a9e5-47d922cf3b3f ']' 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.839 [2024-11-26 12:58:49.433269] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:31.839 [2024-11-26 12:58:49.433293] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:31.839 [2024-11-26 12:58:49.433358] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:31.839 [2024-11-26 12:58:49.433419] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:31.839 [2024-11-26 12:58:49.433428] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.839 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:31.840 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:31.840 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.840 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.840 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.100 [2024-11-26 12:58:49.577042] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:32.100 [2024-11-26 12:58:49.578825] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:32.100 [2024-11-26 12:58:49.578942] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:32.100 [2024-11-26 12:58:49.579014] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:32.100 [2024-11-26 12:58:49.579064] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:32.100 [2024-11-26 12:58:49.579097] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:15:32.100 request: 00:15:32.100 { 00:15:32.100 "name": "raid_bdev1", 00:15:32.100 "raid_level": "raid1", 00:15:32.100 "base_bdevs": [ 00:15:32.100 "malloc1", 00:15:32.100 "malloc2" 00:15:32.100 ], 00:15:32.100 "superblock": false, 00:15:32.100 "method": "bdev_raid_create", 00:15:32.100 "req_id": 1 00:15:32.100 } 00:15:32.100 Got JSON-RPC error response 00:15:32.100 response: 00:15:32.100 { 00:15:32.100 "code": -17, 00:15:32.100 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:32.100 } 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.100 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.100 [2024-11-26 12:58:49.644900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:32.100 [2024-11-26 12:58:49.645009] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.100 [2024-11-26 12:58:49.645042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:32.100 [2024-11-26 12:58:49.645068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.100 [2024-11-26 12:58:49.647020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.101 [2024-11-26 12:58:49.647089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:32.101 [2024-11-26 12:58:49.647164] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:32.101 [2024-11-26 12:58:49.647253] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:32.101 pt1 00:15:32.101 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.101 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:32.101 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.101 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.101 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.101 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.101 12:58:49 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:32.101 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.101 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.101 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.101 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.101 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.101 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.101 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.101 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.101 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.101 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.101 "name": "raid_bdev1", 00:15:32.101 "uuid": "90156e68-6364-4f59-a9e5-47d922cf3b3f", 00:15:32.101 "strip_size_kb": 0, 00:15:32.101 "state": "configuring", 00:15:32.101 "raid_level": "raid1", 00:15:32.101 "superblock": true, 00:15:32.101 "num_base_bdevs": 2, 00:15:32.101 "num_base_bdevs_discovered": 1, 00:15:32.101 "num_base_bdevs_operational": 2, 00:15:32.101 "base_bdevs_list": [ 00:15:32.101 { 00:15:32.101 "name": "pt1", 00:15:32.101 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:32.101 "is_configured": true, 00:15:32.101 "data_offset": 256, 00:15:32.101 "data_size": 7936 00:15:32.101 }, 00:15:32.101 { 00:15:32.101 "name": null, 00:15:32.101 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:32.101 "is_configured": false, 00:15:32.101 "data_offset": 256, 00:15:32.101 "data_size": 7936 00:15:32.101 } 
00:15:32.101 ] 00:15:32.101 }' 00:15:32.101 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.101 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.671 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:32.671 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:32.671 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:32.671 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:32.671 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.671 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.671 [2024-11-26 12:58:50.100174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:32.671 [2024-11-26 12:58:50.100280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.671 [2024-11-26 12:58:50.100315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:32.671 [2024-11-26 12:58:50.100341] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.671 [2024-11-26 12:58:50.100666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.671 [2024-11-26 12:58:50.100720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:32.671 [2024-11-26 12:58:50.100796] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:32.671 [2024-11-26 12:58:50.100841] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:32.671 [2024-11-26 12:58:50.100931] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000006980 00:15:32.671 [2024-11-26 12:58:50.100968] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:32.671 [2024-11-26 12:58:50.101211] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:32.671 [2024-11-26 12:58:50.101351] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:32.671 [2024-11-26 12:58:50.101397] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:15:32.671 [2024-11-26 12:58:50.101513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.671 pt2 00:15:32.671 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.671 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:32.671 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:32.671 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:32.671 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.671 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.671 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.671 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.671 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:32.671 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.671 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.671 12:58:50 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.671 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.671 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.671 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.671 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.671 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.671 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.671 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.671 "name": "raid_bdev1", 00:15:32.671 "uuid": "90156e68-6364-4f59-a9e5-47d922cf3b3f", 00:15:32.671 "strip_size_kb": 0, 00:15:32.671 "state": "online", 00:15:32.671 "raid_level": "raid1", 00:15:32.671 "superblock": true, 00:15:32.671 "num_base_bdevs": 2, 00:15:32.671 "num_base_bdevs_discovered": 2, 00:15:32.671 "num_base_bdevs_operational": 2, 00:15:32.671 "base_bdevs_list": [ 00:15:32.671 { 00:15:32.671 "name": "pt1", 00:15:32.671 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:32.671 "is_configured": true, 00:15:32.671 "data_offset": 256, 00:15:32.671 "data_size": 7936 00:15:32.671 }, 00:15:32.671 { 00:15:32.671 "name": "pt2", 00:15:32.671 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:32.671 "is_configured": true, 00:15:32.671 "data_offset": 256, 00:15:32.671 "data_size": 7936 00:15:32.671 } 00:15:32.671 ] 00:15:32.671 }' 00:15:32.671 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.671 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.931 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:15:32.931 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:32.931 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:32.931 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:32.931 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:32.931 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:32.931 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:32.931 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.931 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:32.931 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.931 [2024-11-26 12:58:50.567782] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:32.931 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.931 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:32.931 "name": "raid_bdev1", 00:15:32.931 "aliases": [ 00:15:32.931 "90156e68-6364-4f59-a9e5-47d922cf3b3f" 00:15:32.931 ], 00:15:32.931 "product_name": "Raid Volume", 00:15:32.931 "block_size": 4096, 00:15:32.931 "num_blocks": 7936, 00:15:32.931 "uuid": "90156e68-6364-4f59-a9e5-47d922cf3b3f", 00:15:32.931 "assigned_rate_limits": { 00:15:32.931 "rw_ios_per_sec": 0, 00:15:32.931 "rw_mbytes_per_sec": 0, 00:15:32.931 "r_mbytes_per_sec": 0, 00:15:32.931 "w_mbytes_per_sec": 0 00:15:32.931 }, 00:15:32.931 "claimed": false, 00:15:32.931 "zoned": false, 00:15:32.931 "supported_io_types": { 00:15:32.931 "read": true, 00:15:32.931 "write": true, 00:15:32.931 "unmap": false, 
00:15:32.931 "flush": false, 00:15:32.931 "reset": true, 00:15:32.931 "nvme_admin": false, 00:15:32.931 "nvme_io": false, 00:15:32.931 "nvme_io_md": false, 00:15:32.931 "write_zeroes": true, 00:15:32.931 "zcopy": false, 00:15:32.931 "get_zone_info": false, 00:15:32.932 "zone_management": false, 00:15:32.932 "zone_append": false, 00:15:32.932 "compare": false, 00:15:32.932 "compare_and_write": false, 00:15:32.932 "abort": false, 00:15:32.932 "seek_hole": false, 00:15:32.932 "seek_data": false, 00:15:32.932 "copy": false, 00:15:32.932 "nvme_iov_md": false 00:15:32.932 }, 00:15:32.932 "memory_domains": [ 00:15:32.932 { 00:15:32.932 "dma_device_id": "system", 00:15:32.932 "dma_device_type": 1 00:15:32.932 }, 00:15:32.932 { 00:15:32.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.932 "dma_device_type": 2 00:15:32.932 }, 00:15:32.932 { 00:15:32.932 "dma_device_id": "system", 00:15:32.932 "dma_device_type": 1 00:15:32.932 }, 00:15:32.932 { 00:15:32.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.932 "dma_device_type": 2 00:15:32.932 } 00:15:32.932 ], 00:15:32.932 "driver_specific": { 00:15:32.932 "raid": { 00:15:32.932 "uuid": "90156e68-6364-4f59-a9e5-47d922cf3b3f", 00:15:32.932 "strip_size_kb": 0, 00:15:32.932 "state": "online", 00:15:32.932 "raid_level": "raid1", 00:15:32.932 "superblock": true, 00:15:32.932 "num_base_bdevs": 2, 00:15:32.932 "num_base_bdevs_discovered": 2, 00:15:32.932 "num_base_bdevs_operational": 2, 00:15:32.932 "base_bdevs_list": [ 00:15:32.932 { 00:15:32.932 "name": "pt1", 00:15:32.932 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:32.932 "is_configured": true, 00:15:32.932 "data_offset": 256, 00:15:32.932 "data_size": 7936 00:15:32.932 }, 00:15:32.932 { 00:15:32.932 "name": "pt2", 00:15:32.932 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:32.932 "is_configured": true, 00:15:32.932 "data_offset": 256, 00:15:32.932 "data_size": 7936 00:15:32.932 } 00:15:32.932 ] 00:15:32.932 } 00:15:32.932 } 00:15:32.932 }' 00:15:32.932 
12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:33.193 pt2' 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.193 
12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:33.193 [2024-11-26 12:58:50.775441] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 90156e68-6364-4f59-a9e5-47d922cf3b3f '!=' 90156e68-6364-4f59-a9e5-47d922cf3b3f ']' 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.193 [2024-11-26 12:58:50.827118] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:33.193 
12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.193 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.453 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.453 "name": "raid_bdev1", 00:15:33.453 "uuid": "90156e68-6364-4f59-a9e5-47d922cf3b3f", 
00:15:33.453 "strip_size_kb": 0, 00:15:33.453 "state": "online", 00:15:33.453 "raid_level": "raid1", 00:15:33.453 "superblock": true, 00:15:33.453 "num_base_bdevs": 2, 00:15:33.453 "num_base_bdevs_discovered": 1, 00:15:33.453 "num_base_bdevs_operational": 1, 00:15:33.453 "base_bdevs_list": [ 00:15:33.453 { 00:15:33.453 "name": null, 00:15:33.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.453 "is_configured": false, 00:15:33.453 "data_offset": 0, 00:15:33.453 "data_size": 7936 00:15:33.453 }, 00:15:33.453 { 00:15:33.453 "name": "pt2", 00:15:33.453 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:33.453 "is_configured": true, 00:15:33.453 "data_offset": 256, 00:15:33.453 "data_size": 7936 00:15:33.453 } 00:15:33.453 ] 00:15:33.453 }' 00:15:33.453 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.453 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.713 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:33.713 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.713 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.713 [2024-11-26 12:58:51.294266] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:33.713 [2024-11-26 12:58:51.294289] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:33.713 [2024-11-26 12:58:51.294351] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:33.713 [2024-11-26 12:58:51.294391] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:33.713 [2024-11-26 12:58:51.294400] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:15:33.713 12:58:51 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.713 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.713 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:33.713 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.713 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.713 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.713 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:33.713 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:33.713 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:33.713 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:33.713 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:33.713 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.713 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.713 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.713 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:33.713 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:33.713 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:33.713 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:33.713 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:15:33.713 12:58:51 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:33.713 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.713 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.713 [2024-11-26 12:58:51.366148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:33.713 [2024-11-26 12:58:51.366262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.713 [2024-11-26 12:58:51.366294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:33.714 [2024-11-26 12:58:51.366320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.714 [2024-11-26 12:58:51.368377] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.714 [2024-11-26 12:58:51.368444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:33.714 [2024-11-26 12:58:51.368537] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:33.714 [2024-11-26 12:58:51.368593] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:33.714 [2024-11-26 12:58:51.368679] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:15:33.714 [2024-11-26 12:58:51.368715] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:33.714 [2024-11-26 12:58:51.368927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:15:33.714 [2024-11-26 12:58:51.369069] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:15:33.714 [2024-11-26 12:58:51.369115] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 
00:15:33.714 [2024-11-26 12:58:51.369256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.714 pt2 00:15:33.714 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.714 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:33.714 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.714 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.714 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:33.714 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:33.714 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:33.714 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.714 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.714 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.714 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.714 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.714 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.714 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.714 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.974 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.974 12:58:51 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.974 "name": "raid_bdev1", 00:15:33.974 "uuid": "90156e68-6364-4f59-a9e5-47d922cf3b3f", 00:15:33.974 "strip_size_kb": 0, 00:15:33.974 "state": "online", 00:15:33.974 "raid_level": "raid1", 00:15:33.974 "superblock": true, 00:15:33.974 "num_base_bdevs": 2, 00:15:33.974 "num_base_bdevs_discovered": 1, 00:15:33.974 "num_base_bdevs_operational": 1, 00:15:33.974 "base_bdevs_list": [ 00:15:33.974 { 00:15:33.974 "name": null, 00:15:33.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.974 "is_configured": false, 00:15:33.974 "data_offset": 256, 00:15:33.974 "data_size": 7936 00:15:33.974 }, 00:15:33.974 { 00:15:33.974 "name": "pt2", 00:15:33.974 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:33.974 "is_configured": true, 00:15:33.974 "data_offset": 256, 00:15:33.974 "data_size": 7936 00:15:33.974 } 00:15:33.974 ] 00:15:33.974 }' 00:15:33.974 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.974 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:34.234 [2024-11-26 12:58:51.813390] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:34.234 [2024-11-26 12:58:51.813450] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:34.234 [2024-11-26 12:58:51.813528] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:34.234 [2024-11-26 12:58:51.813572] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:34.234 [2024-11-26 12:58:51.813603] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:34.234 [2024-11-26 12:58:51.877259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:34.234 [2024-11-26 12:58:51.877304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.234 [2024-11-26 12:58:51.877322] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:15:34.234 [2024-11-26 12:58:51.877336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.234 [2024-11-26 12:58:51.879293] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.234 [2024-11-26 12:58:51.879373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:34.234 [2024-11-26 12:58:51.879431] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:34.234 [2024-11-26 12:58:51.879465] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:34.234 [2024-11-26 12:58:51.879553] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:34.234 [2024-11-26 12:58:51.879566] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:34.234 [2024-11-26 12:58:51.879580] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:15:34.234 [2024-11-26 12:58:51.879613] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:34.234 [2024-11-26 12:58:51.879666] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:34.234 [2024-11-26 12:58:51.879677] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:34.234 [2024-11-26 12:58:51.879885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:34.234 [2024-11-26 12:58:51.879985] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:34.234 [2024-11-26 12:58:51.879994] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:34.234 [2024-11-26 12:58:51.880091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.234 pt1 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.234 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.494 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.494 "name": "raid_bdev1", 00:15:34.494 "uuid": "90156e68-6364-4f59-a9e5-47d922cf3b3f", 00:15:34.494 "strip_size_kb": 0, 00:15:34.494 "state": "online", 00:15:34.494 "raid_level": "raid1", 
00:15:34.494 "superblock": true, 00:15:34.494 "num_base_bdevs": 2, 00:15:34.494 "num_base_bdevs_discovered": 1, 00:15:34.494 "num_base_bdevs_operational": 1, 00:15:34.494 "base_bdevs_list": [ 00:15:34.494 { 00:15:34.494 "name": null, 00:15:34.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.494 "is_configured": false, 00:15:34.494 "data_offset": 256, 00:15:34.494 "data_size": 7936 00:15:34.494 }, 00:15:34.494 { 00:15:34.494 "name": "pt2", 00:15:34.494 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:34.494 "is_configured": true, 00:15:34.494 "data_offset": 256, 00:15:34.494 "data_size": 7936 00:15:34.494 } 00:15:34.494 ] 00:15:34.494 }' 00:15:34.494 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.494 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:34.754 12:58:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:34.755 12:58:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:34.755 12:58:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.755 12:58:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:34.755 12:58:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.755 12:58:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:34.755 12:58:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:34.755 12:58:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:34.755 12:58:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.755 12:58:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:34.755 
[2024-11-26 12:58:52.360636] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.755 12:58:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.755 12:58:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 90156e68-6364-4f59-a9e5-47d922cf3b3f '!=' 90156e68-6364-4f59-a9e5-47d922cf3b3f ']' 00:15:34.755 12:58:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 96773 00:15:34.755 12:58:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 96773 ']' 00:15:34.755 12:58:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 96773 00:15:34.755 12:58:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:15:34.755 12:58:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:34.755 12:58:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96773 00:15:35.015 12:58:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:35.016 12:58:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:35.016 12:58:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96773' 00:15:35.016 killing process with pid 96773 00:15:35.016 12:58:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 96773 00:15:35.016 [2024-11-26 12:58:52.441133] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:35.016 [2024-11-26 12:58:52.441204] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.016 [2024-11-26 12:58:52.441243] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:35.016 [2024-11-26 12:58:52.441251] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:35.016 12:58:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 96773 00:15:35.016 [2024-11-26 12:58:52.463616] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:35.276 12:58:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:15:35.276 00:15:35.276 real 0m5.007s 00:15:35.276 user 0m8.110s 00:15:35.276 sys 0m1.106s 00:15:35.276 12:58:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:35.276 ************************************ 00:15:35.276 END TEST raid_superblock_test_4k 00:15:35.276 ************************************ 00:15:35.276 12:58:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.276 12:58:52 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:15:35.276 12:58:52 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:15:35.276 12:58:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:35.276 12:58:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:35.276 12:58:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:35.276 ************************************ 00:15:35.276 START TEST raid_rebuild_test_sb_4k 00:15:35.276 ************************************ 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:35.276 12:58:52 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=97090 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 97090 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 97090 ']' 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.276 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:35.277 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.277 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:35.277 Zero copy mechanism will not be used. 00:15:35.277 [2024-11-26 12:58:52.898477] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:35.277 [2024-11-26 12:58:52.898678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97090 ] 00:15:35.537 [2024-11-26 12:58:53.057404] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.537 [2024-11-26 12:58:53.103836] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.537 [2024-11-26 12:58:53.146516] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:35.537 [2024-11-26 12:58:53.146622] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:36.108 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:36.108 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:15:36.108 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:36.108 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:15:36.108 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.108 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.108 BaseBdev1_malloc 00:15:36.108 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.108 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:36.108 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.108 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.108 [2024-11-26 12:58:53.745095] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:36.108 [2024-11-26 12:58:53.745153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.108 [2024-11-26 12:58:53.745190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:36.108 [2024-11-26 12:58:53.745203] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.108 [2024-11-26 12:58:53.747411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.108 [2024-11-26 12:58:53.747447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:36.108 BaseBdev1 00:15:36.108 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.108 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:36.108 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:15:36.108 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.108 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.108 BaseBdev2_malloc 00:15:36.108 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.108 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:36.108 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.108 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.369 [2024-11-26 12:58:53.787086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:36.369 [2024-11-26 12:58:53.787207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:15:36.369 [2024-11-26 12:58:53.787251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:36.369 [2024-11-26 12:58:53.787271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.369 [2024-11-26 12:58:53.791638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.369 [2024-11-26 12:58:53.791706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:36.369 BaseBdev2 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.369 spare_malloc 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.369 spare_delay 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.369 
[2024-11-26 12:58:53.829926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:36.369 [2024-11-26 12:58:53.830020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.369 [2024-11-26 12:58:53.830060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:36.369 [2024-11-26 12:58:53.830068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.369 [2024-11-26 12:58:53.832110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.369 [2024-11-26 12:58:53.832147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:36.369 spare 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.369 [2024-11-26 12:58:53.841933] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:36.369 [2024-11-26 12:58:53.843708] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:36.369 [2024-11-26 12:58:53.843882] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:36.369 [2024-11-26 12:58:53.843909] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:36.369 [2024-11-26 12:58:53.844148] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:36.369 [2024-11-26 12:58:53.844279] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:36.369 [2024-11-26 
12:58:53.844304] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:36.369 [2024-11-26 12:58:53.844417] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.369 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.369 "name": "raid_bdev1", 00:15:36.369 "uuid": "9dc02478-f302-41f2-a127-c636dba14b2b", 00:15:36.369 "strip_size_kb": 0, 00:15:36.369 "state": "online", 00:15:36.369 "raid_level": "raid1", 00:15:36.369 "superblock": true, 00:15:36.369 "num_base_bdevs": 2, 00:15:36.369 "num_base_bdevs_discovered": 2, 00:15:36.369 "num_base_bdevs_operational": 2, 00:15:36.369 "base_bdevs_list": [ 00:15:36.369 { 00:15:36.369 "name": "BaseBdev1", 00:15:36.369 "uuid": "9ade3dfe-aad9-5401-be0f-f03917b44041", 00:15:36.369 "is_configured": true, 00:15:36.369 "data_offset": 256, 00:15:36.369 "data_size": 7936 00:15:36.369 }, 00:15:36.369 { 00:15:36.369 "name": "BaseBdev2", 00:15:36.370 "uuid": "5272ef6f-5523-5359-b08a-2fb54ca25191", 00:15:36.370 "is_configured": true, 00:15:36.370 "data_offset": 256, 00:15:36.370 "data_size": 7936 00:15:36.370 } 00:15:36.370 ] 00:15:36.370 }' 00:15:36.370 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.370 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.629 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:36.629 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.629 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.629 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:36.629 [2024-11-26 12:58:54.289453] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:36.629 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.889 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:15:36.889 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.889 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:36.889 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.889 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.889 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.889 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:15:36.889 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:36.889 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:36.889 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:36.889 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:36.889 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:36.889 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:36.890 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:36.890 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:36.890 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:36.890 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:15:36.890 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:36.890 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:36.890 
12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:36.890 [2024-11-26 12:58:54.560750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:37.150 /dev/nbd0 00:15:37.150 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:37.150 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:37.150 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:37.150 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:15:37.150 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:37.150 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:37.150 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:37.150 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:15:37.150 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:37.150 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:37.150 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.150 1+0 records in 00:15:37.150 1+0 records out 00:15:37.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437532 s, 9.4 MB/s 00:15:37.150 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.150 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:15:37.150 12:58:54 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.150 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:37.150 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:15:37.150 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:37.150 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:37.150 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:37.150 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:37.151 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:15:37.720 7936+0 records in 00:15:37.720 7936+0 records out 00:15:37.720 32505856 bytes (33 MB, 31 MiB) copied, 0.597376 s, 54.4 MB/s 00:15:37.720 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:37.720 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:37.720 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:37.720 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:37.720 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:15:37.720 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:37.720 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:37.981 [2024-11-26 12:58:55.425445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:15:37.981 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:37.981 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:37.981 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:37.981 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:37.981 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:37.981 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:37.981 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:37.981 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:37.981 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:37.981 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.981 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.981 [2024-11-26 12:58:55.461448] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:37.981 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.981 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:37.981 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.981 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.981 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:37.981 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:37.981 12:58:55 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:37.981 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.981 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.981 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.981 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.981 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.981 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.981 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.981 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.981 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.981 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.981 "name": "raid_bdev1", 00:15:37.981 "uuid": "9dc02478-f302-41f2-a127-c636dba14b2b", 00:15:37.981 "strip_size_kb": 0, 00:15:37.981 "state": "online", 00:15:37.981 "raid_level": "raid1", 00:15:37.981 "superblock": true, 00:15:37.981 "num_base_bdevs": 2, 00:15:37.981 "num_base_bdevs_discovered": 1, 00:15:37.981 "num_base_bdevs_operational": 1, 00:15:37.981 "base_bdevs_list": [ 00:15:37.981 { 00:15:37.981 "name": null, 00:15:37.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.981 "is_configured": false, 00:15:37.981 "data_offset": 0, 00:15:37.981 "data_size": 7936 00:15:37.981 }, 00:15:37.981 { 00:15:37.981 "name": "BaseBdev2", 00:15:37.981 "uuid": "5272ef6f-5523-5359-b08a-2fb54ca25191", 00:15:37.981 "is_configured": true, 00:15:37.981 "data_offset": 256, 00:15:37.981 
"data_size": 7936 00:15:37.981 } 00:15:37.981 ] 00:15:37.981 }' 00:15:37.981 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.981 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:38.550 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:38.550 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.550 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:38.550 [2024-11-26 12:58:55.968659] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:38.550 [2024-11-26 12:58:55.972870] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:15:38.550 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.550 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:38.550 [2024-11-26 12:58:55.974778] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:39.491 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.491 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.491 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.491 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.491 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.491 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.491 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:15:39.491 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.491 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:39.491 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.491 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.491 "name": "raid_bdev1", 00:15:39.491 "uuid": "9dc02478-f302-41f2-a127-c636dba14b2b", 00:15:39.491 "strip_size_kb": 0, 00:15:39.491 "state": "online", 00:15:39.491 "raid_level": "raid1", 00:15:39.491 "superblock": true, 00:15:39.491 "num_base_bdevs": 2, 00:15:39.491 "num_base_bdevs_discovered": 2, 00:15:39.491 "num_base_bdevs_operational": 2, 00:15:39.491 "process": { 00:15:39.491 "type": "rebuild", 00:15:39.491 "target": "spare", 00:15:39.491 "progress": { 00:15:39.491 "blocks": 2560, 00:15:39.491 "percent": 32 00:15:39.491 } 00:15:39.491 }, 00:15:39.491 "base_bdevs_list": [ 00:15:39.491 { 00:15:39.491 "name": "spare", 00:15:39.491 "uuid": "f25e0fb7-c75f-52c0-b24d-a58f34bd83b6", 00:15:39.491 "is_configured": true, 00:15:39.491 "data_offset": 256, 00:15:39.491 "data_size": 7936 00:15:39.491 }, 00:15:39.491 { 00:15:39.491 "name": "BaseBdev2", 00:15:39.491 "uuid": "5272ef6f-5523-5359-b08a-2fb54ca25191", 00:15:39.491 "is_configured": true, 00:15:39.491 "data_offset": 256, 00:15:39.491 "data_size": 7936 00:15:39.491 } 00:15:39.491 ] 00:15:39.491 }' 00:15:39.491 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.491 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:39.491 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.491 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:15:39.491 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:39.491 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.491 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:39.491 [2024-11-26 12:58:57.139579] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:39.752 [2024-11-26 12:58:57.179296] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:39.752 [2024-11-26 12:58:57.179392] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.752 [2024-11-26 12:58:57.179446] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:39.752 [2024-11-26 12:58:57.179467] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:39.752 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.752 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:39.752 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.752 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.752 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:39.752 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:39.752 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:39.752 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.752 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:15:39.752 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.752 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.752 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.752 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.752 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.752 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:39.752 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.752 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.752 "name": "raid_bdev1", 00:15:39.752 "uuid": "9dc02478-f302-41f2-a127-c636dba14b2b", 00:15:39.752 "strip_size_kb": 0, 00:15:39.752 "state": "online", 00:15:39.752 "raid_level": "raid1", 00:15:39.752 "superblock": true, 00:15:39.752 "num_base_bdevs": 2, 00:15:39.752 "num_base_bdevs_discovered": 1, 00:15:39.752 "num_base_bdevs_operational": 1, 00:15:39.752 "base_bdevs_list": [ 00:15:39.752 { 00:15:39.752 "name": null, 00:15:39.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.752 "is_configured": false, 00:15:39.752 "data_offset": 0, 00:15:39.752 "data_size": 7936 00:15:39.752 }, 00:15:39.752 { 00:15:39.752 "name": "BaseBdev2", 00:15:39.752 "uuid": "5272ef6f-5523-5359-b08a-2fb54ca25191", 00:15:39.752 "is_configured": true, 00:15:39.752 "data_offset": 256, 00:15:39.752 "data_size": 7936 00:15:39.752 } 00:15:39.752 ] 00:15:39.752 }' 00:15:39.752 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.752 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:40.010 12:58:57 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:40.010 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.010 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:40.010 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:40.010 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.010 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.010 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.010 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.010 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:40.011 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.011 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.011 "name": "raid_bdev1", 00:15:40.011 "uuid": "9dc02478-f302-41f2-a127-c636dba14b2b", 00:15:40.011 "strip_size_kb": 0, 00:15:40.011 "state": "online", 00:15:40.011 "raid_level": "raid1", 00:15:40.011 "superblock": true, 00:15:40.011 "num_base_bdevs": 2, 00:15:40.011 "num_base_bdevs_discovered": 1, 00:15:40.011 "num_base_bdevs_operational": 1, 00:15:40.011 "base_bdevs_list": [ 00:15:40.011 { 00:15:40.011 "name": null, 00:15:40.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.011 "is_configured": false, 00:15:40.011 "data_offset": 0, 00:15:40.011 "data_size": 7936 00:15:40.011 }, 00:15:40.011 { 00:15:40.011 "name": "BaseBdev2", 00:15:40.011 "uuid": "5272ef6f-5523-5359-b08a-2fb54ca25191", 00:15:40.011 "is_configured": true, 00:15:40.011 "data_offset": 
256, 00:15:40.011 "data_size": 7936 00:15:40.011 } 00:15:40.011 ] 00:15:40.011 }' 00:15:40.011 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.270 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:40.270 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.270 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:40.270 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:40.270 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.270 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:40.270 [2024-11-26 12:58:57.790549] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:40.270 [2024-11-26 12:58:57.793584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:15:40.270 [2024-11-26 12:58:57.795397] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:40.270 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.270 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:41.210 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:41.210 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.210 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:41.210 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:41.210 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.210 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.210 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.210 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.210 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:41.210 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.210 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.210 "name": "raid_bdev1", 00:15:41.210 "uuid": "9dc02478-f302-41f2-a127-c636dba14b2b", 00:15:41.210 "strip_size_kb": 0, 00:15:41.210 "state": "online", 00:15:41.210 "raid_level": "raid1", 00:15:41.210 "superblock": true, 00:15:41.210 "num_base_bdevs": 2, 00:15:41.210 "num_base_bdevs_discovered": 2, 00:15:41.210 "num_base_bdevs_operational": 2, 00:15:41.210 "process": { 00:15:41.210 "type": "rebuild", 00:15:41.210 "target": "spare", 00:15:41.210 "progress": { 00:15:41.210 "blocks": 2560, 00:15:41.210 "percent": 32 00:15:41.210 } 00:15:41.210 }, 00:15:41.210 "base_bdevs_list": [ 00:15:41.210 { 00:15:41.210 "name": "spare", 00:15:41.210 "uuid": "f25e0fb7-c75f-52c0-b24d-a58f34bd83b6", 00:15:41.210 "is_configured": true, 00:15:41.210 "data_offset": 256, 00:15:41.210 "data_size": 7936 00:15:41.210 }, 00:15:41.210 { 00:15:41.210 "name": "BaseBdev2", 00:15:41.210 "uuid": "5272ef6f-5523-5359-b08a-2fb54ca25191", 00:15:41.210 "is_configured": true, 00:15:41.210 "data_offset": 256, 00:15:41.210 "data_size": 7936 00:15:41.210 } 00:15:41.210 ] 00:15:41.210 }' 00:15:41.210 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.470 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:15:41.470 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.470 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:41.470 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:41.470 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:41.470 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:41.470 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:41.470 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:41.470 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:41.470 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=562 00:15:41.470 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:41.470 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:41.470 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.470 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:41.470 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:41.470 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.470 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.470 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.470 12:58:58 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.470 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:41.470 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.470 12:58:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.470 "name": "raid_bdev1", 00:15:41.470 "uuid": "9dc02478-f302-41f2-a127-c636dba14b2b", 00:15:41.470 "strip_size_kb": 0, 00:15:41.470 "state": "online", 00:15:41.470 "raid_level": "raid1", 00:15:41.470 "superblock": true, 00:15:41.470 "num_base_bdevs": 2, 00:15:41.470 "num_base_bdevs_discovered": 2, 00:15:41.470 "num_base_bdevs_operational": 2, 00:15:41.470 "process": { 00:15:41.470 "type": "rebuild", 00:15:41.470 "target": "spare", 00:15:41.470 "progress": { 00:15:41.470 "blocks": 2816, 00:15:41.470 "percent": 35 00:15:41.470 } 00:15:41.470 }, 00:15:41.470 "base_bdevs_list": [ 00:15:41.470 { 00:15:41.470 "name": "spare", 00:15:41.470 "uuid": "f25e0fb7-c75f-52c0-b24d-a58f34bd83b6", 00:15:41.470 "is_configured": true, 00:15:41.470 "data_offset": 256, 00:15:41.470 "data_size": 7936 00:15:41.470 }, 00:15:41.470 { 00:15:41.470 "name": "BaseBdev2", 00:15:41.471 "uuid": "5272ef6f-5523-5359-b08a-2fb54ca25191", 00:15:41.471 "is_configured": true, 00:15:41.471 "data_offset": 256, 00:15:41.471 "data_size": 7936 00:15:41.471 } 00:15:41.471 ] 00:15:41.471 }' 00:15:41.471 12:58:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.471 12:58:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:41.471 12:58:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.471 12:58:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:41.471 12:58:59 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:15:42.854 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:42.854 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.854 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.854 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.854 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.854 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.854 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.854 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.854 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.854 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:42.855 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.855 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.855 "name": "raid_bdev1", 00:15:42.855 "uuid": "9dc02478-f302-41f2-a127-c636dba14b2b", 00:15:42.855 "strip_size_kb": 0, 00:15:42.855 "state": "online", 00:15:42.855 "raid_level": "raid1", 00:15:42.855 "superblock": true, 00:15:42.855 "num_base_bdevs": 2, 00:15:42.855 "num_base_bdevs_discovered": 2, 00:15:42.855 "num_base_bdevs_operational": 2, 00:15:42.855 "process": { 00:15:42.855 "type": "rebuild", 00:15:42.855 "target": "spare", 00:15:42.855 "progress": { 00:15:42.855 "blocks": 5888, 00:15:42.855 "percent": 74 00:15:42.855 } 00:15:42.855 }, 00:15:42.855 "base_bdevs_list": [ 00:15:42.855 { 
00:15:42.855 "name": "spare", 00:15:42.855 "uuid": "f25e0fb7-c75f-52c0-b24d-a58f34bd83b6", 00:15:42.855 "is_configured": true, 00:15:42.855 "data_offset": 256, 00:15:42.855 "data_size": 7936 00:15:42.855 }, 00:15:42.855 { 00:15:42.855 "name": "BaseBdev2", 00:15:42.855 "uuid": "5272ef6f-5523-5359-b08a-2fb54ca25191", 00:15:42.855 "is_configured": true, 00:15:42.855 "data_offset": 256, 00:15:42.855 "data_size": 7936 00:15:42.855 } 00:15:42.855 ] 00:15:42.855 }' 00:15:42.855 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.855 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:42.855 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.855 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:42.855 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:43.425 [2024-11-26 12:59:00.905293] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:43.425 [2024-11-26 12:59:00.905428] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:43.425 [2024-11-26 12:59:00.905570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.685 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:43.685 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.685 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.685 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.685 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:15:43.685 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.685 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.685 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.685 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.685 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:43.685 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.685 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.685 "name": "raid_bdev1", 00:15:43.685 "uuid": "9dc02478-f302-41f2-a127-c636dba14b2b", 00:15:43.685 "strip_size_kb": 0, 00:15:43.685 "state": "online", 00:15:43.685 "raid_level": "raid1", 00:15:43.685 "superblock": true, 00:15:43.685 "num_base_bdevs": 2, 00:15:43.685 "num_base_bdevs_discovered": 2, 00:15:43.685 "num_base_bdevs_operational": 2, 00:15:43.685 "base_bdevs_list": [ 00:15:43.685 { 00:15:43.685 "name": "spare", 00:15:43.685 "uuid": "f25e0fb7-c75f-52c0-b24d-a58f34bd83b6", 00:15:43.685 "is_configured": true, 00:15:43.685 "data_offset": 256, 00:15:43.685 "data_size": 7936 00:15:43.685 }, 00:15:43.685 { 00:15:43.685 "name": "BaseBdev2", 00:15:43.685 "uuid": "5272ef6f-5523-5359-b08a-2fb54ca25191", 00:15:43.685 "is_configured": true, 00:15:43.685 "data_offset": 256, 00:15:43.685 "data_size": 7936 00:15:43.685 } 00:15:43.685 ] 00:15:43.685 }' 00:15:43.685 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.945 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.946 "name": "raid_bdev1", 00:15:43.946 "uuid": "9dc02478-f302-41f2-a127-c636dba14b2b", 00:15:43.946 "strip_size_kb": 0, 00:15:43.946 "state": "online", 00:15:43.946 "raid_level": "raid1", 00:15:43.946 "superblock": true, 00:15:43.946 "num_base_bdevs": 2, 00:15:43.946 "num_base_bdevs_discovered": 2, 00:15:43.946 "num_base_bdevs_operational": 2, 00:15:43.946 "base_bdevs_list": [ 00:15:43.946 { 00:15:43.946 "name": "spare", 00:15:43.946 "uuid": "f25e0fb7-c75f-52c0-b24d-a58f34bd83b6", 00:15:43.946 "is_configured": true, 00:15:43.946 
"data_offset": 256, 00:15:43.946 "data_size": 7936 00:15:43.946 }, 00:15:43.946 { 00:15:43.946 "name": "BaseBdev2", 00:15:43.946 "uuid": "5272ef6f-5523-5359-b08a-2fb54ca25191", 00:15:43.946 "is_configured": true, 00:15:43.946 "data_offset": 256, 00:15:43.946 "data_size": 7936 00:15:43.946 } 00:15:43.946 ] 00:15:43.946 }' 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.946 "name": "raid_bdev1", 00:15:43.946 "uuid": "9dc02478-f302-41f2-a127-c636dba14b2b", 00:15:43.946 "strip_size_kb": 0, 00:15:43.946 "state": "online", 00:15:43.946 "raid_level": "raid1", 00:15:43.946 "superblock": true, 00:15:43.946 "num_base_bdevs": 2, 00:15:43.946 "num_base_bdevs_discovered": 2, 00:15:43.946 "num_base_bdevs_operational": 2, 00:15:43.946 "base_bdevs_list": [ 00:15:43.946 { 00:15:43.946 "name": "spare", 00:15:43.946 "uuid": "f25e0fb7-c75f-52c0-b24d-a58f34bd83b6", 00:15:43.946 "is_configured": true, 00:15:43.946 "data_offset": 256, 00:15:43.946 "data_size": 7936 00:15:43.946 }, 00:15:43.946 { 00:15:43.946 "name": "BaseBdev2", 00:15:43.946 "uuid": "5272ef6f-5523-5359-b08a-2fb54ca25191", 00:15:43.946 "is_configured": true, 00:15:43.946 "data_offset": 256, 00:15:43.946 "data_size": 7936 00:15:43.946 } 00:15:43.946 ] 00:15:43.946 }' 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.946 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:44.516 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:44.516 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.516 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:44.516 
[2024-11-26 12:59:01.999408] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:44.516 [2024-11-26 12:59:01.999475] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:44.516 [2024-11-26 12:59:01.999574] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:44.516 [2024-11-26 12:59:01.999635] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:44.516 [2024-11-26 12:59:01.999646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:44.516 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.516 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.516 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:15:44.516 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.516 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:44.516 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.516 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:44.516 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:44.516 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:44.516 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:44.516 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:44.516 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:15:44.516 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:44.516 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:44.516 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:44.516 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:15:44.516 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:44.516 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:44.516 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:44.776 /dev/nbd0 00:15:44.776 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:44.776 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:44.776 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:44.776 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:15:44.776 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:44.776 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:44.776 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:44.776 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:15:44.776 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:44.776 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:44.776 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.776 1+0 records in 00:15:44.776 1+0 records out 00:15:44.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000589492 s, 6.9 MB/s 00:15:44.776 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.776 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:15:44.776 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.776 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:44.776 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:15:44.776 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:44.776 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:44.776 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:45.036 /dev/nbd1 00:15:45.036 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:45.036 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:45.036 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:45.036 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:15:45.036 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:45.036 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:45.036 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:45.036 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:15:45.036 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:45.036 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:45.037 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:45.037 1+0 records in 00:15:45.037 1+0 records out 00:15:45.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289526 s, 14.1 MB/s 00:15:45.037 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.037 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:15:45.037 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.037 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:45.037 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:15:45.037 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:45.037 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:45.037 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:45.037 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:45.037 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:45.037 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:45.037 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:45.037 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:15:45.037 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.037 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:45.297 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:45.297 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:45.297 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:45.297 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:45.297 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:45.297 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:45.297 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:45.297 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:45.297 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.297 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:45.557 12:59:03 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:45.557 [2024-11-26 12:59:03.091294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:45.557 [2024-11-26 12:59:03.091350] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.557 [2024-11-26 12:59:03.091372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:45.557 [2024-11-26 12:59:03.091385] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.557 [2024-11-26 12:59:03.093532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.557 
[2024-11-26 12:59:03.093572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:45.557 [2024-11-26 12:59:03.093635] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:45.557 [2024-11-26 12:59:03.093677] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:45.557 [2024-11-26 12:59:03.093772] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:45.557 spare 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:45.557 [2024-11-26 12:59:03.193655] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:15:45.557 [2024-11-26 12:59:03.193677] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:45.557 [2024-11-26 12:59:03.193899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:15:45.557 [2024-11-26 12:59:03.194024] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:15:45.557 [2024-11-26 12:59:03.194037] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:15:45.557 [2024-11-26 12:59:03.194153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:45.557 12:59:03 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:45.557 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.818 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.818 "name": "raid_bdev1", 00:15:45.818 "uuid": "9dc02478-f302-41f2-a127-c636dba14b2b", 00:15:45.818 "strip_size_kb": 0, 00:15:45.818 "state": "online", 00:15:45.818 "raid_level": "raid1", 00:15:45.818 "superblock": true, 00:15:45.818 "num_base_bdevs": 2, 00:15:45.818 "num_base_bdevs_discovered": 2, 00:15:45.818 "num_base_bdevs_operational": 2, 
00:15:45.818 "base_bdevs_list": [ 00:15:45.818 { 00:15:45.818 "name": "spare", 00:15:45.818 "uuid": "f25e0fb7-c75f-52c0-b24d-a58f34bd83b6", 00:15:45.818 "is_configured": true, 00:15:45.818 "data_offset": 256, 00:15:45.818 "data_size": 7936 00:15:45.818 }, 00:15:45.818 { 00:15:45.818 "name": "BaseBdev2", 00:15:45.818 "uuid": "5272ef6f-5523-5359-b08a-2fb54ca25191", 00:15:45.818 "is_configured": true, 00:15:45.818 "data_offset": 256, 00:15:45.818 "data_size": 7936 00:15:45.818 } 00:15:45.818 ] 00:15:45.818 }' 00:15:45.818 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.818 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.078 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:46.078 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.078 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:46.078 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:46.078 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.078 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.078 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.078 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.078 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.078 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.078 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.078 "name": "raid_bdev1", 00:15:46.078 
"uuid": "9dc02478-f302-41f2-a127-c636dba14b2b", 00:15:46.078 "strip_size_kb": 0, 00:15:46.078 "state": "online", 00:15:46.078 "raid_level": "raid1", 00:15:46.078 "superblock": true, 00:15:46.078 "num_base_bdevs": 2, 00:15:46.078 "num_base_bdevs_discovered": 2, 00:15:46.078 "num_base_bdevs_operational": 2, 00:15:46.078 "base_bdevs_list": [ 00:15:46.078 { 00:15:46.078 "name": "spare", 00:15:46.078 "uuid": "f25e0fb7-c75f-52c0-b24d-a58f34bd83b6", 00:15:46.078 "is_configured": true, 00:15:46.078 "data_offset": 256, 00:15:46.078 "data_size": 7936 00:15:46.078 }, 00:15:46.078 { 00:15:46.078 "name": "BaseBdev2", 00:15:46.078 "uuid": "5272ef6f-5523-5359-b08a-2fb54ca25191", 00:15:46.078 "is_configured": true, 00:15:46.078 "data_offset": 256, 00:15:46.078 "data_size": 7936 00:15:46.078 } 00:15:46.078 ] 00:15:46.078 }' 00:15:46.078 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.078 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:46.078 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:46.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:46.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:46.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:46.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.339 [2024-11-26 12:59:03.850022] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:46.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:46.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:46.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:46.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:46.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.339 
12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.339 "name": "raid_bdev1", 00:15:46.339 "uuid": "9dc02478-f302-41f2-a127-c636dba14b2b", 00:15:46.339 "strip_size_kb": 0, 00:15:46.339 "state": "online", 00:15:46.339 "raid_level": "raid1", 00:15:46.339 "superblock": true, 00:15:46.339 "num_base_bdevs": 2, 00:15:46.339 "num_base_bdevs_discovered": 1, 00:15:46.339 "num_base_bdevs_operational": 1, 00:15:46.339 "base_bdevs_list": [ 00:15:46.339 { 00:15:46.339 "name": null, 00:15:46.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.339 "is_configured": false, 00:15:46.339 "data_offset": 0, 00:15:46.339 "data_size": 7936 00:15:46.339 }, 00:15:46.339 { 00:15:46.339 "name": "BaseBdev2", 00:15:46.339 "uuid": "5272ef6f-5523-5359-b08a-2fb54ca25191", 00:15:46.339 "is_configured": true, 00:15:46.339 "data_offset": 256, 00:15:46.339 "data_size": 7936 00:15:46.339 } 00:15:46.339 ] 00:15:46.339 }' 00:15:46.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.909 12:59:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:46.909 12:59:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.909 12:59:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.909 [2024-11-26 12:59:04.305258] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:46.909 [2024-11-26 12:59:04.305432] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:15:46.909 [2024-11-26 12:59:04.305489] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:46.909 [2024-11-26 12:59:04.305537] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:46.909 [2024-11-26 12:59:04.309487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:15:46.909 12:59:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.909 12:59:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:46.909 [2024-11-26 12:59:04.311370] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:47.850 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:47.850 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.850 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:47.850 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:47.850 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.850 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.850 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.850 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.850 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:47.850 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.850 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.850 
"name": "raid_bdev1", 00:15:47.850 "uuid": "9dc02478-f302-41f2-a127-c636dba14b2b", 00:15:47.850 "strip_size_kb": 0, 00:15:47.850 "state": "online", 00:15:47.850 "raid_level": "raid1", 00:15:47.850 "superblock": true, 00:15:47.850 "num_base_bdevs": 2, 00:15:47.850 "num_base_bdevs_discovered": 2, 00:15:47.850 "num_base_bdevs_operational": 2, 00:15:47.850 "process": { 00:15:47.850 "type": "rebuild", 00:15:47.850 "target": "spare", 00:15:47.850 "progress": { 00:15:47.850 "blocks": 2560, 00:15:47.850 "percent": 32 00:15:47.850 } 00:15:47.850 }, 00:15:47.850 "base_bdevs_list": [ 00:15:47.850 { 00:15:47.850 "name": "spare", 00:15:47.850 "uuid": "f25e0fb7-c75f-52c0-b24d-a58f34bd83b6", 00:15:47.850 "is_configured": true, 00:15:47.850 "data_offset": 256, 00:15:47.850 "data_size": 7936 00:15:47.850 }, 00:15:47.850 { 00:15:47.850 "name": "BaseBdev2", 00:15:47.850 "uuid": "5272ef6f-5523-5359-b08a-2fb54ca25191", 00:15:47.850 "is_configured": true, 00:15:47.850 "data_offset": 256, 00:15:47.850 "data_size": 7936 00:15:47.850 } 00:15:47.850 ] 00:15:47.850 }' 00:15:47.850 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.850 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:47.850 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.850 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:47.850 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:47.850 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.850 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:47.850 [2024-11-26 12:59:05.460739] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:47.850 [2024-11-26 
12:59:05.515279] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:47.850 [2024-11-26 12:59:05.515329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.850 [2024-11-26 12:59:05.515376] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:47.850 [2024-11-26 12:59:05.515384] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:47.850 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.851 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:47.851 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.851 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.851 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:47.851 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:47.851 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:47.851 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.851 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.851 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.851 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.110 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.110 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:48.110 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.110 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.110 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.110 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.110 "name": "raid_bdev1", 00:15:48.110 "uuid": "9dc02478-f302-41f2-a127-c636dba14b2b", 00:15:48.110 "strip_size_kb": 0, 00:15:48.110 "state": "online", 00:15:48.110 "raid_level": "raid1", 00:15:48.110 "superblock": true, 00:15:48.110 "num_base_bdevs": 2, 00:15:48.110 "num_base_bdevs_discovered": 1, 00:15:48.110 "num_base_bdevs_operational": 1, 00:15:48.110 "base_bdevs_list": [ 00:15:48.110 { 00:15:48.110 "name": null, 00:15:48.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.110 "is_configured": false, 00:15:48.110 "data_offset": 0, 00:15:48.110 "data_size": 7936 00:15:48.110 }, 00:15:48.110 { 00:15:48.110 "name": "BaseBdev2", 00:15:48.110 "uuid": "5272ef6f-5523-5359-b08a-2fb54ca25191", 00:15:48.110 "is_configured": true, 00:15:48.110 "data_offset": 256, 00:15:48.110 "data_size": 7936 00:15:48.110 } 00:15:48.110 ] 00:15:48.110 }' 00:15:48.110 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.110 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.370 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:48.370 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.370 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.370 [2024-11-26 12:59:05.990187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:48.370 [2024-11-26 12:59:05.990310] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.370 [2024-11-26 12:59:05.990350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:48.370 [2024-11-26 12:59:05.990377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.370 [2024-11-26 12:59:05.990792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.370 [2024-11-26 12:59:05.990849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:48.370 [2024-11-26 12:59:05.990951] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:48.370 [2024-11-26 12:59:05.990990] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:48.370 [2024-11-26 12:59:05.991035] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:48.370 [2024-11-26 12:59:05.991099] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:48.370 [2024-11-26 12:59:05.994306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:15:48.370 spare 00:15:48.370 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.370 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:48.370 [2024-11-26 12:59:05.996276] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:49.346 12:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:49.346 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.346 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:49.346 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:49.346 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.346 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.346 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.346 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.346 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.607 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.607 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.607 "name": "raid_bdev1", 00:15:49.607 "uuid": "9dc02478-f302-41f2-a127-c636dba14b2b", 00:15:49.607 "strip_size_kb": 0, 00:15:49.607 
"state": "online", 00:15:49.607 "raid_level": "raid1", 00:15:49.607 "superblock": true, 00:15:49.607 "num_base_bdevs": 2, 00:15:49.607 "num_base_bdevs_discovered": 2, 00:15:49.607 "num_base_bdevs_operational": 2, 00:15:49.607 "process": { 00:15:49.607 "type": "rebuild", 00:15:49.607 "target": "spare", 00:15:49.607 "progress": { 00:15:49.607 "blocks": 2560, 00:15:49.607 "percent": 32 00:15:49.607 } 00:15:49.607 }, 00:15:49.607 "base_bdevs_list": [ 00:15:49.607 { 00:15:49.607 "name": "spare", 00:15:49.607 "uuid": "f25e0fb7-c75f-52c0-b24d-a58f34bd83b6", 00:15:49.607 "is_configured": true, 00:15:49.607 "data_offset": 256, 00:15:49.607 "data_size": 7936 00:15:49.607 }, 00:15:49.607 { 00:15:49.607 "name": "BaseBdev2", 00:15:49.607 "uuid": "5272ef6f-5523-5359-b08a-2fb54ca25191", 00:15:49.607 "is_configured": true, 00:15:49.607 "data_offset": 256, 00:15:49.607 "data_size": 7936 00:15:49.607 } 00:15:49.607 ] 00:15:49.607 }' 00:15:49.607 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.607 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:49.607 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.607 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:49.607 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:49.607 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.607 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.607 [2024-11-26 12:59:07.137079] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:49.607 [2024-11-26 12:59:07.200144] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:15:49.607 [2024-11-26 12:59:07.200214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.607 [2024-11-26 12:59:07.200228] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:49.607 [2024-11-26 12:59:07.200238] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:49.607 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.607 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:49.607 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.607 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.607 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:49.607 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:49.607 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:49.607 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.607 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.607 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.607 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.607 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.607 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.607 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.607 12:59:07 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.607 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.607 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.607 "name": "raid_bdev1", 00:15:49.607 "uuid": "9dc02478-f302-41f2-a127-c636dba14b2b", 00:15:49.607 "strip_size_kb": 0, 00:15:49.607 "state": "online", 00:15:49.607 "raid_level": "raid1", 00:15:49.607 "superblock": true, 00:15:49.607 "num_base_bdevs": 2, 00:15:49.607 "num_base_bdevs_discovered": 1, 00:15:49.607 "num_base_bdevs_operational": 1, 00:15:49.607 "base_bdevs_list": [ 00:15:49.607 { 00:15:49.607 "name": null, 00:15:49.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.607 "is_configured": false, 00:15:49.607 "data_offset": 0, 00:15:49.607 "data_size": 7936 00:15:49.607 }, 00:15:49.607 { 00:15:49.607 "name": "BaseBdev2", 00:15:49.607 "uuid": "5272ef6f-5523-5359-b08a-2fb54ca25191", 00:15:49.607 "is_configured": true, 00:15:49.607 "data_offset": 256, 00:15:49.607 "data_size": 7936 00:15:49.607 } 00:15:49.607 ] 00:15:49.607 }' 00:15:49.607 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.607 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.178 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:50.178 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.178 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:50.178 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:50.178 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.178 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.178 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.178 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.178 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.178 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.178 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.178 "name": "raid_bdev1", 00:15:50.178 "uuid": "9dc02478-f302-41f2-a127-c636dba14b2b", 00:15:50.178 "strip_size_kb": 0, 00:15:50.178 "state": "online", 00:15:50.178 "raid_level": "raid1", 00:15:50.178 "superblock": true, 00:15:50.178 "num_base_bdevs": 2, 00:15:50.178 "num_base_bdevs_discovered": 1, 00:15:50.178 "num_base_bdevs_operational": 1, 00:15:50.178 "base_bdevs_list": [ 00:15:50.178 { 00:15:50.178 "name": null, 00:15:50.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.178 "is_configured": false, 00:15:50.178 "data_offset": 0, 00:15:50.178 "data_size": 7936 00:15:50.178 }, 00:15:50.178 { 00:15:50.178 "name": "BaseBdev2", 00:15:50.178 "uuid": "5272ef6f-5523-5359-b08a-2fb54ca25191", 00:15:50.178 "is_configured": true, 00:15:50.178 "data_offset": 256, 00:15:50.178 "data_size": 7936 00:15:50.178 } 00:15:50.178 ] 00:15:50.178 }' 00:15:50.178 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.178 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:50.178 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.178 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:50.178 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:50.178 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.178 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.178 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.178 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:50.178 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.178 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.178 [2024-11-26 12:59:07.827038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:50.178 [2024-11-26 12:59:07.827089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.178 [2024-11-26 12:59:07.827106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:50.178 [2024-11-26 12:59:07.827116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.178 [2024-11-26 12:59:07.827668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.178 [2024-11-26 12:59:07.827754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:50.178 [2024-11-26 12:59:07.827860] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:50.178 [2024-11-26 12:59:07.827908] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:50.178 [2024-11-26 12:59:07.827921] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:50.178 [2024-11-26 12:59:07.827934] 
bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:50.178 BaseBdev1 00:15:50.178 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.178 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:51.561 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:51.561 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.561 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.561 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.561 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.561 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:51.561 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.561 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.561 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.561 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.561 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.561 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.561 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.561 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.561 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.561 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.561 "name": "raid_bdev1", 00:15:51.561 "uuid": "9dc02478-f302-41f2-a127-c636dba14b2b", 00:15:51.561 "strip_size_kb": 0, 00:15:51.561 "state": "online", 00:15:51.561 "raid_level": "raid1", 00:15:51.561 "superblock": true, 00:15:51.561 "num_base_bdevs": 2, 00:15:51.561 "num_base_bdevs_discovered": 1, 00:15:51.561 "num_base_bdevs_operational": 1, 00:15:51.561 "base_bdevs_list": [ 00:15:51.561 { 00:15:51.561 "name": null, 00:15:51.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.561 "is_configured": false, 00:15:51.561 "data_offset": 0, 00:15:51.561 "data_size": 7936 00:15:51.561 }, 00:15:51.561 { 00:15:51.561 "name": "BaseBdev2", 00:15:51.561 "uuid": "5272ef6f-5523-5359-b08a-2fb54ca25191", 00:15:51.561 "is_configured": true, 00:15:51.561 "data_offset": 256, 00:15:51.561 "data_size": 7936 00:15:51.561 } 00:15:51.561 ] 00:15:51.561 }' 00:15:51.561 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.561 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.821 "name": "raid_bdev1", 00:15:51.821 "uuid": "9dc02478-f302-41f2-a127-c636dba14b2b", 00:15:51.821 "strip_size_kb": 0, 00:15:51.821 "state": "online", 00:15:51.821 "raid_level": "raid1", 00:15:51.821 "superblock": true, 00:15:51.821 "num_base_bdevs": 2, 00:15:51.821 "num_base_bdevs_discovered": 1, 00:15:51.821 "num_base_bdevs_operational": 1, 00:15:51.821 "base_bdevs_list": [ 00:15:51.821 { 00:15:51.821 "name": null, 00:15:51.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.821 "is_configured": false, 00:15:51.821 "data_offset": 0, 00:15:51.821 "data_size": 7936 00:15:51.821 }, 00:15:51.821 { 00:15:51.821 "name": "BaseBdev2", 00:15:51.821 "uuid": "5272ef6f-5523-5359-b08a-2fb54ca25191", 00:15:51.821 "is_configured": true, 00:15:51.821 "data_offset": 256, 00:15:51.821 "data_size": 7936 00:15:51.821 } 00:15:51.821 ] 00:15:51.821 }' 00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.821 [2024-11-26 12:59:09.452249] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:51.821 [2024-11-26 12:59:09.452415] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:51.821 [2024-11-26 12:59:09.452465] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:51.821 request: 00:15:51.821 { 00:15:51.821 "base_bdev": "BaseBdev1", 00:15:51.821 "raid_bdev": "raid_bdev1", 00:15:51.821 "method": "bdev_raid_add_base_bdev", 00:15:51.821 "req_id": 1 00:15:51.821 } 00:15:51.821 Got JSON-RPC error response 00:15:51.821 response: 00:15:51.821 { 00:15:51.821 "code": -22, 00:15:51.821 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:51.821 } 00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:51.821 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:53.202 12:59:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:53.202 12:59:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.202 12:59:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.202 12:59:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:53.202 12:59:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:53.202 12:59:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:53.202 12:59:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.202 12:59:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.202 12:59:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.202 12:59:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.202 12:59:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.202 12:59:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.202 12:59:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:53.202 12:59:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.202 12:59:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.202 12:59:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.202 "name": "raid_bdev1", 00:15:53.202 "uuid": "9dc02478-f302-41f2-a127-c636dba14b2b", 00:15:53.202 "strip_size_kb": 0, 00:15:53.202 "state": "online", 00:15:53.202 "raid_level": "raid1", 00:15:53.202 "superblock": true, 00:15:53.202 "num_base_bdevs": 2, 00:15:53.202 "num_base_bdevs_discovered": 1, 00:15:53.202 "num_base_bdevs_operational": 1, 00:15:53.202 "base_bdevs_list": [ 00:15:53.202 { 00:15:53.202 "name": null, 00:15:53.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.202 "is_configured": false, 00:15:53.202 "data_offset": 0, 00:15:53.202 "data_size": 7936 00:15:53.202 }, 00:15:53.202 { 00:15:53.202 "name": "BaseBdev2", 00:15:53.202 "uuid": "5272ef6f-5523-5359-b08a-2fb54ca25191", 00:15:53.202 "is_configured": true, 00:15:53.202 "data_offset": 256, 00:15:53.202 "data_size": 7936 00:15:53.202 } 00:15:53.202 ] 00:15:53.202 }' 00:15:53.202 12:59:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.202 12:59:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.463 12:59:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:53.463 12:59:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.463 12:59:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:53.463 12:59:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:53.463 12:59:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.463 12:59:10 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.463 12:59:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.463 12:59:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.463 12:59:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.463 12:59:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.463 12:59:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.463 "name": "raid_bdev1", 00:15:53.463 "uuid": "9dc02478-f302-41f2-a127-c636dba14b2b", 00:15:53.463 "strip_size_kb": 0, 00:15:53.463 "state": "online", 00:15:53.463 "raid_level": "raid1", 00:15:53.463 "superblock": true, 00:15:53.463 "num_base_bdevs": 2, 00:15:53.463 "num_base_bdevs_discovered": 1, 00:15:53.463 "num_base_bdevs_operational": 1, 00:15:53.463 "base_bdevs_list": [ 00:15:53.463 { 00:15:53.463 "name": null, 00:15:53.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.463 "is_configured": false, 00:15:53.463 "data_offset": 0, 00:15:53.463 "data_size": 7936 00:15:53.463 }, 00:15:53.463 { 00:15:53.463 "name": "BaseBdev2", 00:15:53.463 "uuid": "5272ef6f-5523-5359-b08a-2fb54ca25191", 00:15:53.463 "is_configured": true, 00:15:53.463 "data_offset": 256, 00:15:53.463 "data_size": 7936 00:15:53.463 } 00:15:53.463 ] 00:15:53.463 }' 00:15:53.463 12:59:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.463 12:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:53.463 12:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.463 12:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:53.463 12:59:11 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 97090 00:15:53.463 12:59:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 97090 ']' 00:15:53.463 12:59:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 97090 00:15:53.463 12:59:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:15:53.463 12:59:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:53.463 12:59:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97090 00:15:53.463 12:59:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:53.463 12:59:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:53.463 12:59:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97090' 00:15:53.463 killing process with pid 97090 00:15:53.463 12:59:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 97090 00:15:53.463 Received shutdown signal, test time was about 60.000000 seconds 00:15:53.463 00:15:53.463 Latency(us) 00:15:53.463 [2024-11-26T12:59:11.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.463 [2024-11-26T12:59:11.147Z] =================================================================================================================== 00:15:53.463 [2024-11-26T12:59:11.147Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:53.463 [2024-11-26 12:59:11.101403] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:53.463 [2024-11-26 12:59:11.101503] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:53.463 [2024-11-26 12:59:11.101545] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going 
to free all in destruct 00:15:53.463 [2024-11-26 12:59:11.101553] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:15:53.463 12:59:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 97090 00:15:53.463 [2024-11-26 12:59:11.132934] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:53.725 12:59:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:15:53.725 00:15:53.725 real 0m18.568s 00:15:53.725 user 0m24.626s 00:15:53.725 sys 0m2.670s 00:15:53.725 12:59:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:53.725 ************************************ 00:15:53.725 END TEST raid_rebuild_test_sb_4k 00:15:53.725 ************************************ 00:15:53.725 12:59:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.985 12:59:11 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:15:53.985 12:59:11 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:15:53.985 12:59:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:53.985 12:59:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:53.985 12:59:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:53.985 ************************************ 00:15:53.985 START TEST raid_state_function_test_sb_md_separate 00:15:53.985 ************************************ 00:15:53.985 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:15:53.985 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:53.985 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:53.985 12:59:11 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:53.985 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:53.985 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:53.985 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:53.985 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:53.985 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:53.985 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:53.985 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:53.985 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:53.985 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:53.985 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:53.985 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:53.985 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:53.985 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:53.985 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:53.985 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:53.985 12:59:11 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:53.985 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:53.985 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:53.985 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:53.985 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=97768 00:15:53.985 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:53.985 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 97768' 00:15:53.985 Process raid pid: 97768 00:15:53.985 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 97768 00:15:53.985 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97768 ']' 00:15:53.985 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.986 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:53.986 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:53.986 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:53.986 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.986 [2024-11-26 12:59:11.539118] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:53.986 [2024-11-26 12:59:11.539341] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.246 [2024-11-26 12:59:11.705127] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.246 [2024-11-26 12:59:11.752335] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.246 [2024-11-26 12:59:11.795102] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.246 [2024-11-26 12:59:11.795139] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.816 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:54.816 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:15:54.816 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:54.816 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.816 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.816 [2024-11-26 12:59:12.360929] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:54.816 [2024-11-26 12:59:12.360992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev1 doesn't exist now 00:15:54.816 [2024-11-26 12:59:12.361003] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:54.816 [2024-11-26 12:59:12.361012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:54.816 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.816 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:54.816 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.816 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.816 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.816 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.816 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:54.816 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.816 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.816 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.816 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.816 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.816 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:54.816 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.816 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.816 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.816 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.816 "name": "Existed_Raid", 00:15:54.816 "uuid": "f3cae6c2-7669-48c1-a04e-e02a65b31b17", 00:15:54.816 "strip_size_kb": 0, 00:15:54.816 "state": "configuring", 00:15:54.816 "raid_level": "raid1", 00:15:54.816 "superblock": true, 00:15:54.816 "num_base_bdevs": 2, 00:15:54.816 "num_base_bdevs_discovered": 0, 00:15:54.816 "num_base_bdevs_operational": 2, 00:15:54.816 "base_bdevs_list": [ 00:15:54.816 { 00:15:54.816 "name": "BaseBdev1", 00:15:54.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.816 "is_configured": false, 00:15:54.816 "data_offset": 0, 00:15:54.816 "data_size": 0 00:15:54.816 }, 00:15:54.816 { 00:15:54.816 "name": "BaseBdev2", 00:15:54.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.816 "is_configured": false, 00:15:54.816 "data_offset": 0, 00:15:54.816 "data_size": 0 00:15:54.816 } 00:15:54.816 ] 00:15:54.816 }' 00:15:54.816 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.816 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.386 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:55.386 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.386 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.386 
[2024-11-26 12:59:12.768208] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:55.386 [2024-11-26 12:59:12.768307] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:15:55.386 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.386 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:55.386 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.386 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.386 [2024-11-26 12:59:12.780214] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:55.386 [2024-11-26 12:59:12.780296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:55.386 [2024-11-26 12:59:12.780321] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:55.386 [2024-11-26 12:59:12.780342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:55.386 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.386 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:15:55.386 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.386 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.386 [2024-11-26 12:59:12.801563] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.386 
BaseBdev1 00:15:55.386 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.386 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:55.386 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:55.386 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:55.386 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:15:55.386 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:55.386 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:55.386 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:55.386 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.386 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.386 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.386 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:55.386 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.386 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.386 [ 00:15:55.386 { 00:15:55.386 "name": "BaseBdev1", 00:15:55.386 "aliases": [ 00:15:55.386 "34bd6a89-84d0-4e53-9742-b798b9963564" 00:15:55.386 ], 00:15:55.386 "product_name": "Malloc disk", 
00:15:55.386 "block_size": 4096, 00:15:55.386 "num_blocks": 8192, 00:15:55.386 "uuid": "34bd6a89-84d0-4e53-9742-b798b9963564", 00:15:55.386 "md_size": 32, 00:15:55.386 "md_interleave": false, 00:15:55.386 "dif_type": 0, 00:15:55.386 "assigned_rate_limits": { 00:15:55.386 "rw_ios_per_sec": 0, 00:15:55.386 "rw_mbytes_per_sec": 0, 00:15:55.386 "r_mbytes_per_sec": 0, 00:15:55.386 "w_mbytes_per_sec": 0 00:15:55.386 }, 00:15:55.386 "claimed": true, 00:15:55.386 "claim_type": "exclusive_write", 00:15:55.386 "zoned": false, 00:15:55.386 "supported_io_types": { 00:15:55.386 "read": true, 00:15:55.386 "write": true, 00:15:55.386 "unmap": true, 00:15:55.386 "flush": true, 00:15:55.386 "reset": true, 00:15:55.386 "nvme_admin": false, 00:15:55.386 "nvme_io": false, 00:15:55.386 "nvme_io_md": false, 00:15:55.386 "write_zeroes": true, 00:15:55.386 "zcopy": true, 00:15:55.386 "get_zone_info": false, 00:15:55.386 "zone_management": false, 00:15:55.386 "zone_append": false, 00:15:55.386 "compare": false, 00:15:55.386 "compare_and_write": false, 00:15:55.387 "abort": true, 00:15:55.387 "seek_hole": false, 00:15:55.387 "seek_data": false, 00:15:55.387 "copy": true, 00:15:55.387 "nvme_iov_md": false 00:15:55.387 }, 00:15:55.387 "memory_domains": [ 00:15:55.387 { 00:15:55.387 "dma_device_id": "system", 00:15:55.387 "dma_device_type": 1 00:15:55.387 }, 00:15:55.387 { 00:15:55.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.387 "dma_device_type": 2 00:15:55.387 } 00:15:55.387 ], 00:15:55.387 "driver_specific": {} 00:15:55.387 } 00:15:55.387 ] 00:15:55.387 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.387 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:15:55.387 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:55.387 12:59:12 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.387 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.387 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.387 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.387 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:55.387 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.387 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.387 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.387 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.387 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.387 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.387 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.387 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.387 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.387 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.387 "name": "Existed_Raid", 00:15:55.387 "uuid": "8ece3500-5f45-42d7-b2fe-de4c0434f095", 
00:15:55.387 "strip_size_kb": 0, 00:15:55.387 "state": "configuring", 00:15:55.387 "raid_level": "raid1", 00:15:55.387 "superblock": true, 00:15:55.387 "num_base_bdevs": 2, 00:15:55.387 "num_base_bdevs_discovered": 1, 00:15:55.387 "num_base_bdevs_operational": 2, 00:15:55.387 "base_bdevs_list": [ 00:15:55.387 { 00:15:55.387 "name": "BaseBdev1", 00:15:55.387 "uuid": "34bd6a89-84d0-4e53-9742-b798b9963564", 00:15:55.387 "is_configured": true, 00:15:55.387 "data_offset": 256, 00:15:55.387 "data_size": 7936 00:15:55.387 }, 00:15:55.387 { 00:15:55.387 "name": "BaseBdev2", 00:15:55.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.387 "is_configured": false, 00:15:55.387 "data_offset": 0, 00:15:55.387 "data_size": 0 00:15:55.387 } 00:15:55.387 ] 00:15:55.387 }' 00:15:55.387 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.387 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.647 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:55.647 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.647 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.647 [2024-11-26 12:59:13.280802] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:55.647 [2024-11-26 12:59:13.280838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:15:55.647 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.647 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:55.647 12:59:13 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.647 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.647 [2024-11-26 12:59:13.292839] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.647 [2024-11-26 12:59:13.294617] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:55.647 [2024-11-26 12:59:13.294705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:55.647 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.647 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:55.647 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:55.647 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:55.647 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.647 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.647 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.647 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.647 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:55.647 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.647 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.647 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.647 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.647 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.647 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.647 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.647 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.647 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.907 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.907 "name": "Existed_Raid", 00:15:55.907 "uuid": "0d43c603-ffcd-482b-ac50-4db9972af27e", 00:15:55.907 "strip_size_kb": 0, 00:15:55.907 "state": "configuring", 00:15:55.907 "raid_level": "raid1", 00:15:55.907 "superblock": true, 00:15:55.907 "num_base_bdevs": 2, 00:15:55.907 "num_base_bdevs_discovered": 1, 00:15:55.907 "num_base_bdevs_operational": 2, 00:15:55.907 "base_bdevs_list": [ 00:15:55.907 { 00:15:55.907 "name": "BaseBdev1", 00:15:55.907 "uuid": "34bd6a89-84d0-4e53-9742-b798b9963564", 00:15:55.907 "is_configured": true, 00:15:55.907 "data_offset": 256, 00:15:55.907 "data_size": 7936 00:15:55.907 }, 00:15:55.907 { 00:15:55.907 "name": "BaseBdev2", 00:15:55.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.907 "is_configured": false, 00:15:55.907 "data_offset": 0, 00:15:55.907 "data_size": 0 00:15:55.907 } 00:15:55.907 ] 00:15:55.907 }' 00:15:55.907 12:59:13 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.907 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.167 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:15:56.167 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.167 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.167 [2024-11-26 12:59:13.738081] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:56.167 [2024-11-26 12:59:13.738765] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:56.167 [2024-11-26 12:59:13.738924] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:56.167 BaseBdev2 00:15:56.167 [2024-11-26 12:59:13.739323] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:56.168 [2024-11-26 12:59:13.739623] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:56.168 [2024-11-26 12:59:13.739837] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:15:56.168 [2024-11-26 12:59:13.740280] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.168 [ 00:15:56.168 { 00:15:56.168 "name": "BaseBdev2", 00:15:56.168 "aliases": [ 00:15:56.168 "55e16ca2-84b5-4eae-8c76-539c6d4ddccf" 00:15:56.168 ], 00:15:56.168 "product_name": "Malloc disk", 00:15:56.168 "block_size": 4096, 00:15:56.168 "num_blocks": 8192, 00:15:56.168 "uuid": "55e16ca2-84b5-4eae-8c76-539c6d4ddccf", 00:15:56.168 "md_size": 32, 00:15:56.168 "md_interleave": false, 00:15:56.168 "dif_type": 0, 00:15:56.168 "assigned_rate_limits": { 00:15:56.168 "rw_ios_per_sec": 0, 00:15:56.168 "rw_mbytes_per_sec": 0, 00:15:56.168 "r_mbytes_per_sec": 0, 00:15:56.168 "w_mbytes_per_sec": 0 00:15:56.168 }, 00:15:56.168 "claimed": true, 00:15:56.168 "claim_type": 
"exclusive_write", 00:15:56.168 "zoned": false, 00:15:56.168 "supported_io_types": { 00:15:56.168 "read": true, 00:15:56.168 "write": true, 00:15:56.168 "unmap": true, 00:15:56.168 "flush": true, 00:15:56.168 "reset": true, 00:15:56.168 "nvme_admin": false, 00:15:56.168 "nvme_io": false, 00:15:56.168 "nvme_io_md": false, 00:15:56.168 "write_zeroes": true, 00:15:56.168 "zcopy": true, 00:15:56.168 "get_zone_info": false, 00:15:56.168 "zone_management": false, 00:15:56.168 "zone_append": false, 00:15:56.168 "compare": false, 00:15:56.168 "compare_and_write": false, 00:15:56.168 "abort": true, 00:15:56.168 "seek_hole": false, 00:15:56.168 "seek_data": false, 00:15:56.168 "copy": true, 00:15:56.168 "nvme_iov_md": false 00:15:56.168 }, 00:15:56.168 "memory_domains": [ 00:15:56.168 { 00:15:56.168 "dma_device_id": "system", 00:15:56.168 "dma_device_type": 1 00:15:56.168 }, 00:15:56.168 { 00:15:56.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.168 "dma_device_type": 2 00:15:56.168 } 00:15:56.168 ], 00:15:56.168 "driver_specific": {} 00:15:56.168 } 00:15:56.168 ] 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.168 
12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.168 "name": "Existed_Raid", 00:15:56.168 "uuid": "0d43c603-ffcd-482b-ac50-4db9972af27e", 00:15:56.168 "strip_size_kb": 0, 00:15:56.168 "state": "online", 00:15:56.168 "raid_level": "raid1", 00:15:56.168 "superblock": true, 00:15:56.168 "num_base_bdevs": 2, 00:15:56.168 "num_base_bdevs_discovered": 2, 00:15:56.168 "num_base_bdevs_operational": 2, 00:15:56.168 
"base_bdevs_list": [ 00:15:56.168 { 00:15:56.168 "name": "BaseBdev1", 00:15:56.168 "uuid": "34bd6a89-84d0-4e53-9742-b798b9963564", 00:15:56.168 "is_configured": true, 00:15:56.168 "data_offset": 256, 00:15:56.168 "data_size": 7936 00:15:56.168 }, 00:15:56.168 { 00:15:56.168 "name": "BaseBdev2", 00:15:56.168 "uuid": "55e16ca2-84b5-4eae-8c76-539c6d4ddccf", 00:15:56.168 "is_configured": true, 00:15:56.168 "data_offset": 256, 00:15:56.168 "data_size": 7936 00:15:56.168 } 00:15:56.168 ] 00:15:56.168 }' 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.168 12:59:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.738 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:56.738 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:56.738 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:56.738 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:56.738 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:56.738 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:56.738 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:56.738 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:56.738 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.738 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:15:56.738 [2024-11-26 12:59:14.217516] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.738 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.738 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:56.738 "name": "Existed_Raid", 00:15:56.738 "aliases": [ 00:15:56.738 "0d43c603-ffcd-482b-ac50-4db9972af27e" 00:15:56.738 ], 00:15:56.738 "product_name": "Raid Volume", 00:15:56.738 "block_size": 4096, 00:15:56.738 "num_blocks": 7936, 00:15:56.738 "uuid": "0d43c603-ffcd-482b-ac50-4db9972af27e", 00:15:56.738 "md_size": 32, 00:15:56.738 "md_interleave": false, 00:15:56.738 "dif_type": 0, 00:15:56.738 "assigned_rate_limits": { 00:15:56.738 "rw_ios_per_sec": 0, 00:15:56.738 "rw_mbytes_per_sec": 0, 00:15:56.738 "r_mbytes_per_sec": 0, 00:15:56.738 "w_mbytes_per_sec": 0 00:15:56.738 }, 00:15:56.738 "claimed": false, 00:15:56.738 "zoned": false, 00:15:56.738 "supported_io_types": { 00:15:56.738 "read": true, 00:15:56.738 "write": true, 00:15:56.738 "unmap": false, 00:15:56.738 "flush": false, 00:15:56.738 "reset": true, 00:15:56.738 "nvme_admin": false, 00:15:56.738 "nvme_io": false, 00:15:56.738 "nvme_io_md": false, 00:15:56.738 "write_zeroes": true, 00:15:56.738 "zcopy": false, 00:15:56.738 "get_zone_info": false, 00:15:56.738 "zone_management": false, 00:15:56.738 "zone_append": false, 00:15:56.738 "compare": false, 00:15:56.738 "compare_and_write": false, 00:15:56.738 "abort": false, 00:15:56.738 "seek_hole": false, 00:15:56.738 "seek_data": false, 00:15:56.738 "copy": false, 00:15:56.738 "nvme_iov_md": false 00:15:56.738 }, 00:15:56.738 "memory_domains": [ 00:15:56.738 { 00:15:56.738 "dma_device_id": "system", 00:15:56.738 "dma_device_type": 1 00:15:56.738 }, 00:15:56.738 { 00:15:56.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.738 "dma_device_type": 2 00:15:56.738 }, 00:15:56.738 { 
00:15:56.738 "dma_device_id": "system", 00:15:56.738 "dma_device_type": 1 00:15:56.738 }, 00:15:56.738 { 00:15:56.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.738 "dma_device_type": 2 00:15:56.738 } 00:15:56.738 ], 00:15:56.738 "driver_specific": { 00:15:56.738 "raid": { 00:15:56.738 "uuid": "0d43c603-ffcd-482b-ac50-4db9972af27e", 00:15:56.738 "strip_size_kb": 0, 00:15:56.738 "state": "online", 00:15:56.738 "raid_level": "raid1", 00:15:56.738 "superblock": true, 00:15:56.738 "num_base_bdevs": 2, 00:15:56.738 "num_base_bdevs_discovered": 2, 00:15:56.738 "num_base_bdevs_operational": 2, 00:15:56.739 "base_bdevs_list": [ 00:15:56.739 { 00:15:56.739 "name": "BaseBdev1", 00:15:56.739 "uuid": "34bd6a89-84d0-4e53-9742-b798b9963564", 00:15:56.739 "is_configured": true, 00:15:56.739 "data_offset": 256, 00:15:56.739 "data_size": 7936 00:15:56.739 }, 00:15:56.739 { 00:15:56.739 "name": "BaseBdev2", 00:15:56.739 "uuid": "55e16ca2-84b5-4eae-8c76-539c6d4ddccf", 00:15:56.739 "is_configured": true, 00:15:56.739 "data_offset": 256, 00:15:56.739 "data_size": 7936 00:15:56.739 } 00:15:56.739 ] 00:15:56.739 } 00:15:56.739 } 00:15:56.739 }' 00:15:56.739 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:56.739 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:56.739 BaseBdev2' 00:15:56.739 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.739 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:56.739 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:56.739 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:56.739 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.739 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.739 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.739 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.739 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:56.739 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:56.739 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:56.739 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:56.739 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.739 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.739 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.000 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.000 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:57.000 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:15:57.000 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:57.000 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.000 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.000 [2024-11-26 12:59:14.456952] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:57.000 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.000 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:57.000 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:57.000 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:57.000 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:15:57.000 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:57.000 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:57.000 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.000 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.000 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.000 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.000 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:15:57.000 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.000 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.000 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.000 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.000 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.000 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.000 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.000 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.000 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.000 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.000 "name": "Existed_Raid", 00:15:57.000 "uuid": "0d43c603-ffcd-482b-ac50-4db9972af27e", 00:15:57.000 "strip_size_kb": 0, 00:15:57.000 "state": "online", 00:15:57.000 "raid_level": "raid1", 00:15:57.000 "superblock": true, 00:15:57.000 "num_base_bdevs": 2, 00:15:57.000 "num_base_bdevs_discovered": 1, 00:15:57.000 "num_base_bdevs_operational": 1, 00:15:57.000 "base_bdevs_list": [ 00:15:57.000 { 00:15:57.000 "name": null, 00:15:57.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.000 "is_configured": false, 00:15:57.000 "data_offset": 0, 00:15:57.000 "data_size": 7936 00:15:57.000 }, 00:15:57.000 { 00:15:57.000 "name": "BaseBdev2", 00:15:57.000 "uuid": 
"55e16ca2-84b5-4eae-8c76-539c6d4ddccf", 00:15:57.000 "is_configured": true, 00:15:57.000 "data_offset": 256, 00:15:57.000 "data_size": 7936 00:15:57.000 } 00:15:57.000 ] 00:15:57.000 }' 00:15:57.000 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.000 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.260 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:57.261 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:57.261 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:57.261 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.261 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.261 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.261 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.261 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:57.261 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:57.261 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:57.261 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.261 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.261 [2024-11-26 12:59:14.915998] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:57.261 [2024-11-26 12:59:14.916163] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:57.261 [2024-11-26 12:59:14.928636] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.261 [2024-11-26 12:59:14.928768] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:57.261 [2024-11-26 12:59:14.928811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:15:57.261 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.261 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:57.261 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:57.261 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.261 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.261 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:57.261 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.521 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.521 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:57.521 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:57.521 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:57.521 12:59:14 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 97768 00:15:57.521 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97768 ']' 00:15:57.521 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 97768 00:15:57.521 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:15:57.521 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:57.521 12:59:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97768 00:15:57.521 12:59:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:57.521 killing process with pid 97768 00:15:57.521 12:59:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:57.521 12:59:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97768' 00:15:57.521 12:59:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 97768 00:15:57.521 [2024-11-26 12:59:15.030617] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:57.521 12:59:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 97768 00:15:57.521 [2024-11-26 12:59:15.031623] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:57.781 12:59:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:15:57.781 00:15:57.781 real 0m3.842s 00:15:57.781 user 0m5.951s 00:15:57.781 sys 0m0.871s 00:15:57.781 12:59:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:57.781 
************************************ 00:15:57.781 END TEST raid_state_function_test_sb_md_separate 00:15:57.781 ************************************ 00:15:57.781 12:59:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.781 12:59:15 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:15:57.781 12:59:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:57.781 12:59:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:57.781 12:59:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:57.781 ************************************ 00:15:57.781 START TEST raid_superblock_test_md_separate 00:15:57.781 ************************************ 00:15:57.781 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:15:57.781 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:57.781 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:57.781 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:57.781 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:57.781 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:57.781 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:57.782 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:57.782 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:57.782 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:15:57.782 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:57.782 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:57.782 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:57.782 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:57.782 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:57.782 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:57.782 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=98005 00:15:57.782 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:57.782 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 98005 00:15:57.782 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 98005 ']' 00:15:57.782 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.782 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:57.782 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:57.782 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:57.782 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.042 [2024-11-26 12:59:15.465349] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:58.042 [2024-11-26 12:59:15.465600] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98005 ] 00:15:58.042 [2024-11-26 12:59:15.630405] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.042 [2024-11-26 12:59:15.678009] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.302 [2024-11-26 12:59:15.721426] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.303 [2024-11-26 12:59:15.721542] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.873 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:58.873 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:15:58.873 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:58.873 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:58.873 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:58.873 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:58.873 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:58.873 12:59:16 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:58.873 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:58.873 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:58.873 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:15:58.873 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.873 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.873 malloc1 00:15:58.873 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.873 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:58.873 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.873 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.873 [2024-11-26 12:59:16.312302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:58.873 [2024-11-26 12:59:16.312400] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.873 [2024-11-26 12:59:16.312451] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:58.873 [2024-11-26 12:59:16.312491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.873 [2024-11-26 12:59:16.314369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.873 [2024-11-26 12:59:16.314440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:15:58.873 pt1 00:15:58.873 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.873 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:58.873 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:58.873 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:58.873 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:58.873 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:58.873 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:58.873 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:58.873 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:58.873 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:15:58.873 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.873 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.873 malloc2 00:15:58.873 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.873 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:58.874 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.874 12:59:16 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.874 [2024-11-26 12:59:16.355902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:58.874 [2024-11-26 12:59:16.356093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.874 [2024-11-26 12:59:16.356160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:58.874 [2024-11-26 12:59:16.356263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.874 [2024-11-26 12:59:16.359842] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.874 [2024-11-26 12:59:16.359969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:58.874 pt2 00:15:58.874 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.874 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:58.874 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:58.874 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:58.874 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.874 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.874 [2024-11-26 12:59:16.368261] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:58.874 [2024-11-26 12:59:16.370724] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:58.874 [2024-11-26 12:59:16.370973] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:58.874 [2024-11-26 12:59:16.371041] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:58.874 [2024-11-26 12:59:16.371188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:58.874 [2024-11-26 12:59:16.371367] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:58.874 [2024-11-26 12:59:16.371425] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:58.874 [2024-11-26 12:59:16.371619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.874 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.874 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:58.874 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.874 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.874 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.874 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.874 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:58.874 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.874 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.874 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.874 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.874 12:59:16 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:58.874 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:58.874 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:58.874 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:58.874 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.874 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:58.874 "name": "raid_bdev1",
00:15:58.874 "uuid": "61850d4f-d1b4-46df-9877-fc1dd39887ed",
00:15:58.874 "strip_size_kb": 0,
00:15:58.874 "state": "online",
00:15:58.874 "raid_level": "raid1",
00:15:58.874 "superblock": true,
00:15:58.874 "num_base_bdevs": 2,
00:15:58.874 "num_base_bdevs_discovered": 2,
00:15:58.874 "num_base_bdevs_operational": 2,
00:15:58.874 "base_bdevs_list": [
00:15:58.874 {
00:15:58.874 "name": "pt1",
00:15:58.874 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:58.874 "is_configured": true,
00:15:58.874 "data_offset": 256,
00:15:58.874 "data_size": 7936
00:15:58.874 },
00:15:58.874 {
00:15:58.874 "name": "pt2",
00:15:58.874 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:58.874 "is_configured": true,
00:15:58.874 "data_offset": 256,
00:15:58.874 "data_size": 7936
00:15:58.874 }
00:15:58.874 ]
00:15:58.874 }'
00:15:58.874 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:58.874 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:59.134 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:15:59.134 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:15:59.134 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:15:59.134 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:15:59.134 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name
00:15:59.134 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:15:59.134 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:15:59.134 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:59.134 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:59.134 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:59.134 [2024-11-26 12:59:16.803680] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:59.395 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:59.395 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:15:59.395 "name": "raid_bdev1",
00:15:59.395 "aliases": [
00:15:59.395 "61850d4f-d1b4-46df-9877-fc1dd39887ed"
00:15:59.395 ],
00:15:59.395 "product_name": "Raid Volume",
00:15:59.395 "block_size": 4096,
00:15:59.395 "num_blocks": 7936,
00:15:59.395 "uuid": "61850d4f-d1b4-46df-9877-fc1dd39887ed",
00:15:59.395 "md_size": 32,
00:15:59.395 "md_interleave": false,
00:15:59.395 "dif_type": 0,
00:15:59.395 "assigned_rate_limits": {
00:15:59.395 "rw_ios_per_sec": 0,
00:15:59.395 "rw_mbytes_per_sec": 0,
00:15:59.395 "r_mbytes_per_sec": 0,
00:15:59.395 "w_mbytes_per_sec": 0
00:15:59.395 },
00:15:59.395 "claimed": false,
00:15:59.395 "zoned": false,
00:15:59.395 "supported_io_types": {
00:15:59.395 "read": true,
00:15:59.395 "write": true,
00:15:59.395 "unmap": false,
00:15:59.395 "flush": false,
00:15:59.395 "reset": true,
00:15:59.395 "nvme_admin": false,
00:15:59.395 "nvme_io": false,
00:15:59.395 "nvme_io_md": false,
00:15:59.395 "write_zeroes": true,
00:15:59.395 "zcopy": false,
00:15:59.395 "get_zone_info": false,
00:15:59.395 "zone_management": false,
00:15:59.395 "zone_append": false,
00:15:59.395 "compare": false,
00:15:59.395 "compare_and_write": false,
00:15:59.395 "abort": false,
00:15:59.395 "seek_hole": false,
00:15:59.395 "seek_data": false,
00:15:59.395 "copy": false,
00:15:59.395 "nvme_iov_md": false
00:15:59.395 },
00:15:59.395 "memory_domains": [
00:15:59.395 {
00:15:59.395 "dma_device_id": "system",
00:15:59.395 "dma_device_type": 1
00:15:59.395 },
00:15:59.395 {
00:15:59.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:59.395 "dma_device_type": 2
00:15:59.395 },
00:15:59.395 {
00:15:59.395 "dma_device_id": "system",
00:15:59.395 "dma_device_type": 1
00:15:59.395 },
00:15:59.395 {
00:15:59.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:59.395 "dma_device_type": 2
00:15:59.395 }
00:15:59.395 ],
00:15:59.395 "driver_specific": {
00:15:59.395 "raid": {
00:15:59.395 "uuid": "61850d4f-d1b4-46df-9877-fc1dd39887ed",
00:15:59.395 "strip_size_kb": 0,
00:15:59.395 "state": "online",
00:15:59.395 "raid_level": "raid1",
00:15:59.395 "superblock": true,
00:15:59.395 "num_base_bdevs": 2,
00:15:59.395 "num_base_bdevs_discovered": 2,
00:15:59.395 "num_base_bdevs_operational": 2,
00:15:59.395 "base_bdevs_list": [
00:15:59.395 {
00:15:59.395 "name": "pt1",
00:15:59.395 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:59.395 "is_configured": true,
00:15:59.395 "data_offset": 256,
00:15:59.395 "data_size": 7936
00:15:59.395 },
00:15:59.395 {
00:15:59.395 "name": "pt2",
00:15:59.395 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:59.395 "is_configured": true,
00:15:59.395 "data_offset": 256,
00:15:59.395 "data_size": 7936
00:15:59.395 }
00:15:59.395 ]
00:15:59.395 }
00:15:59.395 }
00:15:59.395 }'
00:15:59.395 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:15:59.395 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:15:59.395 pt2'
00:15:59.395 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:59.395 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0'
00:15:59.395 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:59.395 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:59.395 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:15:59.395 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:59.395 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:59.395 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:59.395 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:15:59.395 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:15:59.395 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:59.395 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:15:59.395 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:59.395 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:59.395 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:59.395 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:59.395 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:15:59.395 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:15:59.395 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:15:59.395 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:59.395 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:59.395 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:59.395 [2024-11-26 12:59:17.019234] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:59.395 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:59.395 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=61850d4f-d1b4-46df-9877-fc1dd39887ed
00:15:59.395 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 61850d4f-d1b4-46df-9877-fc1dd39887ed ']'
00:15:59.395 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:15:59.395 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:59.395 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:59.395 [2024-11-26 12:59:17.062944] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:59.395 [2024-11-26 12:59:17.063009] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:59.395 [2024-11-26 12:59:17.063093] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:59.395 [2024-11-26 12:59:17.063164] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:59.395 [2024-11-26 12:59:17.063234] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:15:59.395 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:59.656 [2024-11-26 12:59:17.198718] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:15:59.656 [2024-11-26 12:59:17.200555] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:15:59.656 [2024-11-26 12:59:17.200652] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:15:59.656 [2024-11-26 12:59:17.200745] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:15:59.656 [2024-11-26 12:59:17.200798] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:59.656 [2024-11-26 12:59:17.200832] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring
00:15:59.656 request:
00:15:59.656 {
00:15:59.656 "name": "raid_bdev1",
00:15:59.656 "raid_level": "raid1",
00:15:59.656 "base_bdevs": [
00:15:59.656 "malloc1",
00:15:59.656 "malloc2"
00:15:59.656 ],
00:15:59.656 "superblock": false,
00:15:59.656 "method": "bdev_raid_create",
00:15:59.656 "req_id": 1
00:15:59.656 }
00:15:59.656 Got JSON-RPC error response
00:15:59.656 response:
00:15:59.656 {
00:15:59.656 "code": -17,
00:15:59.656 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:15:59.656 }
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:15:59.656 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:59.657 [2024-11-26 12:59:17.266565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:59.657 [2024-11-26 12:59:17.266647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:59.657 [2024-11-26 12:59:17.266677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:15:59.657 [2024-11-26 12:59:17.266702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:59.657 [2024-11-26 12:59:17.268551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:59.657 [2024-11-26 12:59:17.268620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:15:59.657 [2024-11-26 12:59:17.268680] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:15:59.657 [2024-11-26 12:59:17.268740] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:15:59.657 pt1
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:59.657 "name": "raid_bdev1",
00:15:59.657 "uuid": "61850d4f-d1b4-46df-9877-fc1dd39887ed",
00:15:59.657 "strip_size_kb": 0,
00:15:59.657 "state": "configuring",
00:15:59.657 "raid_level": "raid1",
00:15:59.657 "superblock": true,
00:15:59.657 "num_base_bdevs": 2,
00:15:59.657 "num_base_bdevs_discovered": 1,
00:15:59.657 "num_base_bdevs_operational": 2,
00:15:59.657 "base_bdevs_list": [
00:15:59.657 {
00:15:59.657 "name": "pt1",
00:15:59.657 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:59.657 "is_configured": true,
00:15:59.657 "data_offset": 256,
00:15:59.657 "data_size": 7936
00:15:59.657 },
00:15:59.657 {
00:15:59.657 "name": null,
00:15:59.657 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:59.657 "is_configured": false,
00:15:59.657 "data_offset": 256,
00:15:59.657 "data_size": 7936
00:15:59.657 }
00:15:59.657 ]
00:15:59.657 }'
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:59.657 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:00.227 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:16:00.227 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:16:00.227 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:16:00.227 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:00.227 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:00.227 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:00.227 [2024-11-26 12:59:17.749764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:00.227 [2024-11-26 12:59:17.749812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:00.227 [2024-11-26 12:59:17.749828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:16:00.227 [2024-11-26 12:59:17.749836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:00.227 [2024-11-26 12:59:17.749957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:00.227 [2024-11-26 12:59:17.749968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:00.227 [2024-11-26 12:59:17.750001] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:16:00.227 [2024-11-26 12:59:17.750015] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:00.227 [2024-11-26 12:59:17.750079] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:16:00.227 [2024-11-26 12:59:17.750085] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:16:00.227 [2024-11-26 12:59:17.750141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:16:00.227 [2024-11-26 12:59:17.750226] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:16:00.227 [2024-11-26 12:59:17.750239] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:16:00.227 [2024-11-26 12:59:17.750290] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:00.227 pt2
00:16:00.227 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:00.227 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:16:00.227 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:16:00.227 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:16:00.227 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:00.227 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:00.227 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:00.227 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:00.227 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:16:00.227 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:00.227 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:00.227 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:00.227 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:00.227 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:00.227 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:00.227 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:00.227 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:00.227 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:00.227 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:00.227 "name": "raid_bdev1",
00:16:00.227 "uuid": "61850d4f-d1b4-46df-9877-fc1dd39887ed",
00:16:00.227 "strip_size_kb": 0,
00:16:00.227 "state": "online",
00:16:00.227 "raid_level": "raid1",
00:16:00.227 "superblock": true,
00:16:00.227 "num_base_bdevs": 2,
00:16:00.227 "num_base_bdevs_discovered": 2,
00:16:00.227 "num_base_bdevs_operational": 2,
00:16:00.227 "base_bdevs_list": [
00:16:00.227 {
00:16:00.227 "name": "pt1",
00:16:00.227 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:00.227 "is_configured": true,
00:16:00.227 "data_offset": 256,
00:16:00.227 "data_size": 7936
00:16:00.227 },
00:16:00.227 {
00:16:00.227 "name": "pt2",
00:16:00.227 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:00.227 "is_configured": true,
00:16:00.227 "data_offset": 256,
00:16:00.227 "data_size": 7936
00:16:00.227 }
00:16:00.227 ]
00:16:00.227 }'
00:16:00.227 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:00.227 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:00.797 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:16:00.797 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:16:00.797 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:16:00.797 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:16:00.797 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name
00:16:00.797 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:16:00.797 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:00.797 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:16:00.797 12:59:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:00.797 12:59:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:00.797 [2024-11-26 12:59:18.217240] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:00.797 12:59:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:00.797 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:16:00.797 "name": "raid_bdev1",
00:16:00.797 "aliases": [
00:16:00.797 "61850d4f-d1b4-46df-9877-fc1dd39887ed"
00:16:00.797 ],
00:16:00.797 "product_name": "Raid Volume",
00:16:00.797 "block_size": 4096,
00:16:00.797 "num_blocks": 7936,
00:16:00.797 "uuid": "61850d4f-d1b4-46df-9877-fc1dd39887ed",
00:16:00.797 "md_size": 32,
00:16:00.797 "md_interleave": false,
00:16:00.797 "dif_type": 0,
00:16:00.797 "assigned_rate_limits": {
00:16:00.797 "rw_ios_per_sec": 0,
00:16:00.797 "rw_mbytes_per_sec": 0,
00:16:00.797 "r_mbytes_per_sec": 0,
00:16:00.797 "w_mbytes_per_sec": 0
00:16:00.797 },
00:16:00.797 "claimed": false,
00:16:00.797 "zoned": false,
00:16:00.797 "supported_io_types": {
00:16:00.797 "read": true,
00:16:00.797 "write": true,
00:16:00.797 "unmap": false,
00:16:00.797 "flush": false,
00:16:00.797 "reset": true,
00:16:00.797 "nvme_admin": false,
00:16:00.797 "nvme_io": false,
00:16:00.797 "nvme_io_md": false,
00:16:00.797 "write_zeroes": true,
00:16:00.797 "zcopy": false,
00:16:00.797 "get_zone_info": false,
00:16:00.797 "zone_management": false,
00:16:00.797 "zone_append": false,
00:16:00.797 "compare": false,
00:16:00.797 "compare_and_write": false,
00:16:00.797 "abort": false,
00:16:00.797 "seek_hole": false,
00:16:00.797 "seek_data": false,
00:16:00.797 "copy": false,
00:16:00.797 "nvme_iov_md": false
00:16:00.797 },
00:16:00.797 "memory_domains": [
00:16:00.797 {
00:16:00.797 "dma_device_id": "system",
00:16:00.797 "dma_device_type": 1
00:16:00.797 },
00:16:00.797 {
00:16:00.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:00.797 "dma_device_type": 2
00:16:00.797 },
00:16:00.797 {
00:16:00.797 "dma_device_id": "system",
00:16:00.797 "dma_device_type": 1
00:16:00.797 },
00:16:00.797 {
00:16:00.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:00.797 "dma_device_type": 2
00:16:00.797 }
00:16:00.797 ],
00:16:00.797 "driver_specific": {
00:16:00.797 "raid": {
00:16:00.797 "uuid": "61850d4f-d1b4-46df-9877-fc1dd39887ed",
00:16:00.797 "strip_size_kb": 0,
00:16:00.797 "state": "online",
00:16:00.797 "raid_level": "raid1",
00:16:00.797 "superblock": true,
00:16:00.797 "num_base_bdevs": 2,
00:16:00.797 "num_base_bdevs_discovered": 2,
00:16:00.797 "num_base_bdevs_operational": 2,
00:16:00.797 "base_bdevs_list": [
00:16:00.797 {
00:16:00.797 "name": "pt1",
00:16:00.797 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:00.797 "is_configured": true,
00:16:00.798 "data_offset": 256,
00:16:00.798 "data_size": 7936
00:16:00.798 },
00:16:00.798 {
00:16:00.798 "name": "pt2",
00:16:00.798 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:00.798 "is_configured": true,
00:16:00.798 "data_offset": 256,
00:16:00.798 "data_size": 7936
00:16:00.798 }
00:16:00.798 ]
00:16:00.798 }
00:16:00.798 }
00:16:00.798 }'
00:16:00.798 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:16:00.798 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:16:00.798 pt2'
00:16:00.798 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:00.798 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0'
00:16:00.798 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:00.798 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:16:00.798 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:00.798 12:59:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:00.798 12:59:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:00.798 12:59:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:00.798 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:16:00.798 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:16:00.798 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:00.798 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:16:00.798 12:59:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:00.798 12:59:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:00.798 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:00.798 12:59:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:00.798 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:16:00.798 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:16:00.798 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:16:00.798 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:00.798 12:59:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:00.798 12:59:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:00.798 [2024-11-26 12:59:18.460777] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:01.058 12:59:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:01.058 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 61850d4f-d1b4-46df-9877-fc1dd39887ed '!=' 61850d4f-d1b4-46df-9877-fc1dd39887ed ']'
00:16:01.058 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:16:01.058 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in
00:16:01.058 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0
00:16:01.058 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:16:01.058 12:59:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:01.058 12:59:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:01.058 [2024-11-26 12:59:18.508496] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:16:01.058 12:59:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:01.058 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:16:01.058 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:01.058 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:01.058 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:01.058 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:01.058 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:16:01.058 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:01.058 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:01.058 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:01.058 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:01.058 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:01.058 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:01.058 12:59:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:01.058 12:59:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:16:01.058 12:59:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:01.058 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:01.058 "name": "raid_bdev1",
00:16:01.058 "uuid": "61850d4f-d1b4-46df-9877-fc1dd39887ed",
00:16:01.058 "strip_size_kb": 0,
00:16:01.058 "state": "online",
00:16:01.058 "raid_level": "raid1",
00:16:01.058 "superblock": true,
00:16:01.058 "num_base_bdevs": 2,
00:16:01.058 "num_base_bdevs_discovered": 1,
00:16:01.058 "num_base_bdevs_operational": 1,
00:16:01.058 "base_bdevs_list": [
00:16:01.058 {
00:16:01.058 "name": null,
00:16:01.058 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:01.058 "is_configured": false,
00:16:01.058 "data_offset": 0,
00:16:01.058 "data_size": 7936
00:16:01.058 },
00:16:01.058 {
00:16:01.058 "name": "pt2",
00:16:01.058 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:01.058 "is_configured": true,
00:16:01.058 "data_offset": 256,
00:16:01.058 "data_size": 7936
00:16:01.058 }
00:16:01.058 ]
00:16:01.058 }'
00:16:01.058 12:59:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- #
xtrace_disable 00:16:01.058 12:59:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.629 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:01.629 12:59:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.629 12:59:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.629 [2024-11-26 12:59:19.011726] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.629 [2024-11-26 12:59:19.011813] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:01.629 [2024-11-26 12:59:19.011875] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.629 [2024-11-26 12:59:19.011925] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:01.629 [2024-11-26 12:59:19.011972] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:16:01.629 12:59:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.629 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.629 12:59:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.629 12:59:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.629 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:01.629 12:59:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.629 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:01.629 12:59:19 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:01.629 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:01.629 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:01.629 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:01.629 12:59:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.629 12:59:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.629 12:59:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.629 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:01.629 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:01.629 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:01.629 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:01.629 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:16:01.629 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:01.629 12:59:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.629 12:59:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.629 [2024-11-26 12:59:19.083614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:01.629 [2024-11-26 12:59:19.083709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.629 
[2024-11-26 12:59:19.083741] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:01.629 [2024-11-26 12:59:19.083775] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.629 [2024-11-26 12:59:19.085644] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.629 [2024-11-26 12:59:19.085726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:01.629 [2024-11-26 12:59:19.085790] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:01.629 [2024-11-26 12:59:19.085830] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:01.629 [2024-11-26 12:59:19.085908] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:16:01.630 [2024-11-26 12:59:19.085917] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:01.630 [2024-11-26 12:59:19.085981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:01.630 [2024-11-26 12:59:19.086050] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:16:01.630 [2024-11-26 12:59:19.086058] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:16:01.630 [2024-11-26 12:59:19.086112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.630 pt2 00:16:01.630 12:59:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.630 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:01.630 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.630 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:16:01.630 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:01.630 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:01.630 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:01.630 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.630 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.630 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.630 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.630 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.630 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.630 12:59:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.630 12:59:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.630 12:59:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.630 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.630 "name": "raid_bdev1", 00:16:01.630 "uuid": "61850d4f-d1b4-46df-9877-fc1dd39887ed", 00:16:01.630 "strip_size_kb": 0, 00:16:01.630 "state": "online", 00:16:01.630 "raid_level": "raid1", 00:16:01.630 "superblock": true, 00:16:01.630 "num_base_bdevs": 2, 00:16:01.630 "num_base_bdevs_discovered": 1, 00:16:01.630 "num_base_bdevs_operational": 1, 00:16:01.630 "base_bdevs_list": [ 00:16:01.630 { 00:16:01.630 
"name": null, 00:16:01.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.630 "is_configured": false, 00:16:01.630 "data_offset": 256, 00:16:01.630 "data_size": 7936 00:16:01.630 }, 00:16:01.630 { 00:16:01.630 "name": "pt2", 00:16:01.630 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:01.630 "is_configured": true, 00:16:01.630 "data_offset": 256, 00:16:01.630 "data_size": 7936 00:16:01.630 } 00:16:01.630 ] 00:16:01.630 }' 00:16:01.630 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.630 12:59:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.889 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:01.889 12:59:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.889 12:59:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.889 [2024-11-26 12:59:19.518878] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.889 [2024-11-26 12:59:19.518898] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:01.889 [2024-11-26 12:59:19.518940] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.889 [2024-11-26 12:59:19.518970] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:01.889 [2024-11-26 12:59:19.518979] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:16:01.889 12:59:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.889 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.889 12:59:19 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:01.889 12:59:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.889 12:59:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.889 12:59:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.149 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:02.149 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:02.149 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:02.149 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:02.149 12:59:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.149 12:59:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.149 [2024-11-26 12:59:19.578769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:02.149 [2024-11-26 12:59:19.578872] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.149 [2024-11-26 12:59:19.578904] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:02.149 [2024-11-26 12:59:19.578935] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.149 [2024-11-26 12:59:19.580811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.149 [2024-11-26 12:59:19.580884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:02.149 [2024-11-26 12:59:19.580942] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:16:02.149 [2024-11-26 12:59:19.580979] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:02.149 [2024-11-26 12:59:19.581068] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:02.149 [2024-11-26 12:59:19.581080] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:02.149 [2024-11-26 12:59:19.581092] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:16:02.149 [2024-11-26 12:59:19.581124] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:02.149 [2024-11-26 12:59:19.581193] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:02.149 [2024-11-26 12:59:19.581205] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:02.149 [2024-11-26 12:59:19.581265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:02.149 [2024-11-26 12:59:19.581331] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:02.149 [2024-11-26 12:59:19.581338] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:02.149 [2024-11-26 12:59:19.581404] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.149 pt1 00:16:02.149 12:59:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.149 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:02.149 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:02.149 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:16:02.149 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.149 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:02.149 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:02.149 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:02.149 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.149 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.149 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.149 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.149 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.149 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.149 12:59:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.149 12:59:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.149 12:59:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.149 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.149 "name": "raid_bdev1", 00:16:02.149 "uuid": "61850d4f-d1b4-46df-9877-fc1dd39887ed", 00:16:02.149 "strip_size_kb": 0, 00:16:02.149 "state": "online", 00:16:02.149 "raid_level": "raid1", 00:16:02.149 "superblock": true, 00:16:02.149 "num_base_bdevs": 2, 00:16:02.149 "num_base_bdevs_discovered": 1, 00:16:02.149 
"num_base_bdevs_operational": 1, 00:16:02.149 "base_bdevs_list": [ 00:16:02.149 { 00:16:02.149 "name": null, 00:16:02.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.149 "is_configured": false, 00:16:02.149 "data_offset": 256, 00:16:02.149 "data_size": 7936 00:16:02.149 }, 00:16:02.149 { 00:16:02.149 "name": "pt2", 00:16:02.149 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:02.149 "is_configured": true, 00:16:02.149 "data_offset": 256, 00:16:02.149 "data_size": 7936 00:16:02.149 } 00:16:02.149 ] 00:16:02.149 }' 00:16:02.149 12:59:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.149 12:59:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.409 12:59:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:02.409 12:59:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:02.409 12:59:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.409 12:59:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.409 12:59:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.669 12:59:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:02.669 12:59:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:02.669 12:59:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.669 12:59:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:02.669 12:59:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.669 [2024-11-26 
12:59:20.102110] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:02.669 12:59:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.669 12:59:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 61850d4f-d1b4-46df-9877-fc1dd39887ed '!=' 61850d4f-d1b4-46df-9877-fc1dd39887ed ']' 00:16:02.669 12:59:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 98005 00:16:02.669 12:59:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 98005 ']' 00:16:02.669 12:59:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 98005 00:16:02.669 12:59:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:16:02.669 12:59:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:02.669 12:59:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98005 00:16:02.669 12:59:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:02.669 12:59:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:02.669 killing process with pid 98005 00:16:02.669 12:59:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98005' 00:16:02.669 12:59:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 98005 00:16:02.669 [2024-11-26 12:59:20.172884] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:02.669 [2024-11-26 12:59:20.172936] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:02.669 [2024-11-26 12:59:20.172970] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:16:02.670 [2024-11-26 12:59:20.172977] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:02.670 12:59:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 98005 00:16:02.670 [2024-11-26 12:59:20.197062] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:02.931 12:59:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:16:02.931 00:16:02.931 real 0m5.087s 00:16:02.931 user 0m8.236s 00:16:02.931 sys 0m1.165s 00:16:02.931 12:59:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:02.931 12:59:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.931 ************************************ 00:16:02.931 END TEST raid_superblock_test_md_separate 00:16:02.931 ************************************ 00:16:02.931 12:59:20 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:16:02.931 12:59:20 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:16:02.931 12:59:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:02.931 12:59:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:02.931 12:59:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:02.931 ************************************ 00:16:02.931 START TEST raid_rebuild_test_sb_md_separate 00:16:02.931 ************************************ 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:02.931 
12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=98322 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 98322 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 98322 ']' 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:02.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:02.931 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.192 [2024-11-26 12:59:20.638817] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:03.192 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:03.192 Zero copy mechanism will not be used. 00:16:03.192 [2024-11-26 12:59:20.639011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98322 ] 00:16:03.192 [2024-11-26 12:59:20.806037] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.192 [2024-11-26 12:59:20.853885] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.452 [2024-11-26 12:59:20.896827] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.452 [2024-11-26 12:59:20.896867] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.022 BaseBdev1_malloc 
00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.022 [2024-11-26 12:59:21.479506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:04.022 [2024-11-26 12:59:21.479604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.022 [2024-11-26 12:59:21.479648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:04.022 [2024-11-26 12:59:21.479656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.022 [2024-11-26 12:59:21.481545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.022 [2024-11-26 12:59:21.481581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:04.022 BaseBdev1 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.022 BaseBdev2_malloc 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.022 [2024-11-26 12:59:21.516144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:04.022 [2024-11-26 12:59:21.516213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.022 [2024-11-26 12:59:21.516237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:04.022 [2024-11-26 12:59:21.516247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.022 [2024-11-26 12:59:21.518269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.022 [2024-11-26 12:59:21.518356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:04.022 BaseBdev2 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.022 spare_malloc 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.022 spare_delay 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.022 [2024-11-26 12:59:21.557233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:04.022 [2024-11-26 12:59:21.557339] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.022 [2024-11-26 12:59:21.557365] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:04.022 [2024-11-26 12:59:21.557377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.022 [2024-11-26 12:59:21.559201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.022 [2024-11-26 12:59:21.559235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:04.022 spare 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.022 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:16:04.022 [2024-11-26 12:59:21.569233] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:04.022 [2024-11-26 12:59:21.570896] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:04.022 [2024-11-26 12:59:21.571056] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:04.022 [2024-11-26 12:59:21.571068] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:04.022 [2024-11-26 12:59:21.571141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:04.022 [2024-11-26 12:59:21.571253] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:04.022 [2024-11-26 12:59:21.571265] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:04.023 [2024-11-26 12:59:21.571362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.023 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.023 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:04.023 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.023 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.023 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.023 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.023 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:04.023 12:59:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.023 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.023 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.023 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.023 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.023 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.023 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.023 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.023 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.023 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.023 "name": "raid_bdev1", 00:16:04.023 "uuid": "c3ae73f2-bd03-4a83-b229-b739117f1716", 00:16:04.023 "strip_size_kb": 0, 00:16:04.023 "state": "online", 00:16:04.023 "raid_level": "raid1", 00:16:04.023 "superblock": true, 00:16:04.023 "num_base_bdevs": 2, 00:16:04.023 "num_base_bdevs_discovered": 2, 00:16:04.023 "num_base_bdevs_operational": 2, 00:16:04.023 "base_bdevs_list": [ 00:16:04.023 { 00:16:04.023 "name": "BaseBdev1", 00:16:04.023 "uuid": "e0529493-2273-5ce4-90ff-5cf8583574ec", 00:16:04.023 "is_configured": true, 00:16:04.023 "data_offset": 256, 00:16:04.023 "data_size": 7936 00:16:04.023 }, 00:16:04.023 { 00:16:04.023 "name": "BaseBdev2", 00:16:04.023 "uuid": "2fbf78ad-aa64-5ba0-84c6-3ca66329fc67", 00:16:04.023 "is_configured": true, 00:16:04.023 "data_offset": 256, 00:16:04.023 "data_size": 7936 
00:16:04.023 } 00:16:04.023 ] 00:16:04.023 }' 00:16:04.023 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.023 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.592 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:04.592 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:04.592 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.592 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.592 [2024-11-26 12:59:22.048600] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:04.592 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.592 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:04.592 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:04.592 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.592 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.592 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.592 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.592 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:04.592 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:04.592 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:04.592 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:04.592 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:04.592 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:04.592 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:04.592 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:04.592 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:04.592 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:04.592 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:04.592 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:04.592 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:04.593 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:04.852 [2024-11-26 12:59:22.307929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:04.852 /dev/nbd0 00:16:04.852 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:04.852 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:04.852 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:04.852 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@869 -- # local i 00:16:04.852 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:04.852 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:04.852 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:04.852 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:16:04.852 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:04.852 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:04.852 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:04.852 1+0 records in 00:16:04.852 1+0 records out 00:16:04.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000520169 s, 7.9 MB/s 00:16:04.852 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.852 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:16:04.852 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.852 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:04.852 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:16:04.852 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:04.852 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:04.852 12:59:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:04.852 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:04.852 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:05.422 7936+0 records in 00:16:05.422 7936+0 records out 00:16:05.422 32505856 bytes (33 MB, 31 MiB) copied, 0.589468 s, 55.1 MB/s 00:16:05.422 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:05.422 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:05.422 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:05.422 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:05.422 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:05.422 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.422 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:05.681 [2024-11-26 12:59:23.188150] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.681 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:05.681 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:05.681 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:05.681 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.681 12:59:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.681 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:05.681 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:05.681 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.681 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:05.681 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.681 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:05.681 [2024-11-26 12:59:23.217517] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:05.681 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.681 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:05.681 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.681 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.681 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.681 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.681 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:05.681 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.681 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:16:05.681 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.681 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.681 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.681 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.681 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.681 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:05.681 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.681 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.681 "name": "raid_bdev1", 00:16:05.681 "uuid": "c3ae73f2-bd03-4a83-b229-b739117f1716", 00:16:05.681 "strip_size_kb": 0, 00:16:05.681 "state": "online", 00:16:05.681 "raid_level": "raid1", 00:16:05.681 "superblock": true, 00:16:05.681 "num_base_bdevs": 2, 00:16:05.681 "num_base_bdevs_discovered": 1, 00:16:05.681 "num_base_bdevs_operational": 1, 00:16:05.681 "base_bdevs_list": [ 00:16:05.681 { 00:16:05.681 "name": null, 00:16:05.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.681 "is_configured": false, 00:16:05.681 "data_offset": 0, 00:16:05.681 "data_size": 7936 00:16:05.681 }, 00:16:05.681 { 00:16:05.681 "name": "BaseBdev2", 00:16:05.681 "uuid": "2fbf78ad-aa64-5ba0-84c6-3ca66329fc67", 00:16:05.681 "is_configured": true, 00:16:05.681 "data_offset": 256, 00:16:05.681 "data_size": 7936 00:16:05.681 } 00:16:05.681 ] 00:16:05.681 }' 00:16:05.681 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.681 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:16:06.250 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:06.250 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.250 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:06.250 [2024-11-26 12:59:23.700696] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:06.250 [2024-11-26 12:59:23.702446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:16:06.250 [2024-11-26 12:59:23.704332] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:06.250 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.250 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:07.191 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:07.191 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.191 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:07.191 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:07.191 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.191 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.191 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.191 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.191 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.191 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.191 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.191 "name": "raid_bdev1", 00:16:07.191 "uuid": "c3ae73f2-bd03-4a83-b229-b739117f1716", 00:16:07.191 "strip_size_kb": 0, 00:16:07.191 "state": "online", 00:16:07.191 "raid_level": "raid1", 00:16:07.191 "superblock": true, 00:16:07.191 "num_base_bdevs": 2, 00:16:07.191 "num_base_bdevs_discovered": 2, 00:16:07.191 "num_base_bdevs_operational": 2, 00:16:07.191 "process": { 00:16:07.191 "type": "rebuild", 00:16:07.191 "target": "spare", 00:16:07.191 "progress": { 00:16:07.191 "blocks": 2560, 00:16:07.191 "percent": 32 00:16:07.191 } 00:16:07.191 }, 00:16:07.191 "base_bdevs_list": [ 00:16:07.191 { 00:16:07.191 "name": "spare", 00:16:07.191 "uuid": "dab971ad-737d-5dfd-a75f-bfe9a313c83a", 00:16:07.191 "is_configured": true, 00:16:07.191 "data_offset": 256, 00:16:07.191 "data_size": 7936 00:16:07.191 }, 00:16:07.191 { 00:16:07.191 "name": "BaseBdev2", 00:16:07.191 "uuid": "2fbf78ad-aa64-5ba0-84c6-3ca66329fc67", 00:16:07.191 "is_configured": true, 00:16:07.191 "data_offset": 256, 00:16:07.191 "data_size": 7936 00:16:07.191 } 00:16:07.191 ] 00:16:07.191 }' 00:16:07.191 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.191 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:07.191 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.191 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.191 12:59:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:07.191 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.191 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.191 [2024-11-26 12:59:24.851971] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:07.451 [2024-11-26 12:59:24.908911] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:07.451 [2024-11-26 12:59:24.908967] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.451 [2024-11-26 12:59:24.908985] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:07.451 [2024-11-26 12:59:24.908992] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:07.451 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.451 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:07.451 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.451 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.451 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.451 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.451 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:07.451 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.451 12:59:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.451 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.451 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.451 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.451 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.451 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.451 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.451 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.451 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.451 "name": "raid_bdev1", 00:16:07.451 "uuid": "c3ae73f2-bd03-4a83-b229-b739117f1716", 00:16:07.451 "strip_size_kb": 0, 00:16:07.451 "state": "online", 00:16:07.451 "raid_level": "raid1", 00:16:07.451 "superblock": true, 00:16:07.451 "num_base_bdevs": 2, 00:16:07.451 "num_base_bdevs_discovered": 1, 00:16:07.451 "num_base_bdevs_operational": 1, 00:16:07.451 "base_bdevs_list": [ 00:16:07.451 { 00:16:07.451 "name": null, 00:16:07.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.451 "is_configured": false, 00:16:07.451 "data_offset": 0, 00:16:07.451 "data_size": 7936 00:16:07.451 }, 00:16:07.451 { 00:16:07.451 "name": "BaseBdev2", 00:16:07.451 "uuid": "2fbf78ad-aa64-5ba0-84c6-3ca66329fc67", 00:16:07.451 "is_configured": true, 00:16:07.451 "data_offset": 256, 00:16:07.451 "data_size": 7936 00:16:07.451 } 00:16:07.451 ] 00:16:07.451 }' 00:16:07.451 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.451 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.711 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:07.711 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.711 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:07.970 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:07.970 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.970 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.970 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.970 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.970 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.970 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.970 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.970 "name": "raid_bdev1", 00:16:07.970 "uuid": "c3ae73f2-bd03-4a83-b229-b739117f1716", 00:16:07.970 "strip_size_kb": 0, 00:16:07.970 "state": "online", 00:16:07.970 "raid_level": "raid1", 00:16:07.970 "superblock": true, 00:16:07.970 "num_base_bdevs": 2, 00:16:07.970 "num_base_bdevs_discovered": 1, 00:16:07.970 "num_base_bdevs_operational": 1, 00:16:07.970 "base_bdevs_list": [ 00:16:07.970 { 00:16:07.970 "name": null, 00:16:07.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.970 
"is_configured": false, 00:16:07.970 "data_offset": 0, 00:16:07.970 "data_size": 7936 00:16:07.970 }, 00:16:07.970 { 00:16:07.970 "name": "BaseBdev2", 00:16:07.970 "uuid": "2fbf78ad-aa64-5ba0-84c6-3ca66329fc67", 00:16:07.970 "is_configured": true, 00:16:07.970 "data_offset": 256, 00:16:07.970 "data_size": 7936 00:16:07.970 } 00:16:07.970 ] 00:16:07.970 }' 00:16:07.970 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.970 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:07.970 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.970 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:07.970 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:07.970 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.970 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.970 [2024-11-26 12:59:25.539077] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.970 [2024-11-26 12:59:25.540495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:16:07.970 [2024-11-26 12:59:25.542359] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:07.970 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.970 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:08.953 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.953 12:59:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.953 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.953 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.953 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.953 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.953 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.953 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.953 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:08.953 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.953 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.953 "name": "raid_bdev1", 00:16:08.953 "uuid": "c3ae73f2-bd03-4a83-b229-b739117f1716", 00:16:08.953 "strip_size_kb": 0, 00:16:08.953 "state": "online", 00:16:08.953 "raid_level": "raid1", 00:16:08.953 "superblock": true, 00:16:08.953 "num_base_bdevs": 2, 00:16:08.953 "num_base_bdevs_discovered": 2, 00:16:08.953 "num_base_bdevs_operational": 2, 00:16:08.953 "process": { 00:16:08.953 "type": "rebuild", 00:16:08.953 "target": "spare", 00:16:08.953 "progress": { 00:16:08.953 "blocks": 2560, 00:16:08.953 "percent": 32 00:16:08.953 } 00:16:08.953 }, 00:16:08.953 "base_bdevs_list": [ 00:16:08.953 { 00:16:08.953 "name": "spare", 00:16:08.953 "uuid": "dab971ad-737d-5dfd-a75f-bfe9a313c83a", 00:16:08.953 "is_configured": true, 00:16:08.953 "data_offset": 256, 00:16:08.953 "data_size": 7936 00:16:08.953 }, 
00:16:08.953 { 00:16:08.953 "name": "BaseBdev2", 00:16:08.953 "uuid": "2fbf78ad-aa64-5ba0-84c6-3ca66329fc67", 00:16:08.953 "is_configured": true, 00:16:08.953 "data_offset": 256, 00:16:08.953 "data_size": 7936 00:16:08.953 } 00:16:08.953 ] 00:16:08.953 }' 00:16:08.953 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.214 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.214 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.214 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.214 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:09.214 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:09.214 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:09.214 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:09.214 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:09.214 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:09.214 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=590 00:16:09.214 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:09.214 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.214 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.214 12:59:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:09.214 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.214 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.214 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.214 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.214 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:09.214 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.214 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.214 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.214 "name": "raid_bdev1", 00:16:09.214 "uuid": "c3ae73f2-bd03-4a83-b229-b739117f1716", 00:16:09.214 "strip_size_kb": 0, 00:16:09.214 "state": "online", 00:16:09.214 "raid_level": "raid1", 00:16:09.214 "superblock": true, 00:16:09.214 "num_base_bdevs": 2, 00:16:09.214 "num_base_bdevs_discovered": 2, 00:16:09.214 "num_base_bdevs_operational": 2, 00:16:09.214 "process": { 00:16:09.214 "type": "rebuild", 00:16:09.214 "target": "spare", 00:16:09.214 "progress": { 00:16:09.214 "blocks": 2816, 00:16:09.214 "percent": 35 00:16:09.214 } 00:16:09.214 }, 00:16:09.214 "base_bdevs_list": [ 00:16:09.214 { 00:16:09.214 "name": "spare", 00:16:09.214 "uuid": "dab971ad-737d-5dfd-a75f-bfe9a313c83a", 00:16:09.214 "is_configured": true, 00:16:09.214 "data_offset": 256, 00:16:09.214 "data_size": 7936 00:16:09.214 }, 00:16:09.214 { 00:16:09.214 "name": "BaseBdev2", 00:16:09.214 "uuid": "2fbf78ad-aa64-5ba0-84c6-3ca66329fc67", 00:16:09.214 
"is_configured": true, 00:16:09.214 "data_offset": 256, 00:16:09.214 "data_size": 7936 00:16:09.214 } 00:16:09.214 ] 00:16:09.214 }' 00:16:09.214 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.214 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.214 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.214 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.214 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:10.154 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:10.154 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.154 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.154 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.154 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.154 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.154 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.154 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.154 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.154 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:10.414 12:59:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.414 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.414 "name": "raid_bdev1", 00:16:10.414 "uuid": "c3ae73f2-bd03-4a83-b229-b739117f1716", 00:16:10.414 "strip_size_kb": 0, 00:16:10.414 "state": "online", 00:16:10.414 "raid_level": "raid1", 00:16:10.414 "superblock": true, 00:16:10.414 "num_base_bdevs": 2, 00:16:10.414 "num_base_bdevs_discovered": 2, 00:16:10.414 "num_base_bdevs_operational": 2, 00:16:10.414 "process": { 00:16:10.414 "type": "rebuild", 00:16:10.414 "target": "spare", 00:16:10.414 "progress": { 00:16:10.414 "blocks": 5632, 00:16:10.414 "percent": 70 00:16:10.414 } 00:16:10.414 }, 00:16:10.414 "base_bdevs_list": [ 00:16:10.414 { 00:16:10.414 "name": "spare", 00:16:10.414 "uuid": "dab971ad-737d-5dfd-a75f-bfe9a313c83a", 00:16:10.414 "is_configured": true, 00:16:10.414 "data_offset": 256, 00:16:10.414 "data_size": 7936 00:16:10.414 }, 00:16:10.414 { 00:16:10.414 "name": "BaseBdev2", 00:16:10.414 "uuid": "2fbf78ad-aa64-5ba0-84c6-3ca66329fc67", 00:16:10.414 "is_configured": true, 00:16:10.414 "data_offset": 256, 00:16:10.414 "data_size": 7936 00:16:10.414 } 00:16:10.414 ] 00:16:10.414 }' 00:16:10.414 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.414 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.414 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.414 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.414 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:10.984 [2024-11-26 12:59:28.652630] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:16:10.984 [2024-11-26 12:59:28.652767] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:10.984 [2024-11-26 12:59:28.652895] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.553 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:11.554 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.554 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.554 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.554 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.554 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.554 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.554 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.554 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.554 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.554 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.554 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.554 "name": "raid_bdev1", 00:16:11.554 "uuid": "c3ae73f2-bd03-4a83-b229-b739117f1716", 00:16:11.554 "strip_size_kb": 0, 00:16:11.554 "state": "online", 00:16:11.554 "raid_level": "raid1", 00:16:11.554 "superblock": true, 00:16:11.554 
"num_base_bdevs": 2, 00:16:11.554 "num_base_bdevs_discovered": 2, 00:16:11.554 "num_base_bdevs_operational": 2, 00:16:11.554 "base_bdevs_list": [ 00:16:11.554 { 00:16:11.554 "name": "spare", 00:16:11.554 "uuid": "dab971ad-737d-5dfd-a75f-bfe9a313c83a", 00:16:11.554 "is_configured": true, 00:16:11.554 "data_offset": 256, 00:16:11.554 "data_size": 7936 00:16:11.554 }, 00:16:11.554 { 00:16:11.554 "name": "BaseBdev2", 00:16:11.554 "uuid": "2fbf78ad-aa64-5ba0-84c6-3ca66329fc67", 00:16:11.554 "is_configured": true, 00:16:11.554 "data_offset": 256, 00:16:11.554 "data_size": 7936 00:16:11.554 } 00:16:11.554 ] 00:16:11.554 }' 00:16:11.554 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.554 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:11.554 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.554 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:11.554 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:16:11.554 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:11.554 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.554 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:11.554 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:11.554 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.554 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.554 12:59:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.554 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.554 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.554 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.554 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.554 "name": "raid_bdev1", 00:16:11.554 "uuid": "c3ae73f2-bd03-4a83-b229-b739117f1716", 00:16:11.554 "strip_size_kb": 0, 00:16:11.554 "state": "online", 00:16:11.554 "raid_level": "raid1", 00:16:11.554 "superblock": true, 00:16:11.554 "num_base_bdevs": 2, 00:16:11.554 "num_base_bdevs_discovered": 2, 00:16:11.554 "num_base_bdevs_operational": 2, 00:16:11.554 "base_bdevs_list": [ 00:16:11.554 { 00:16:11.554 "name": "spare", 00:16:11.554 "uuid": "dab971ad-737d-5dfd-a75f-bfe9a313c83a", 00:16:11.554 "is_configured": true, 00:16:11.554 "data_offset": 256, 00:16:11.554 "data_size": 7936 00:16:11.554 }, 00:16:11.554 { 00:16:11.554 "name": "BaseBdev2", 00:16:11.554 "uuid": "2fbf78ad-aa64-5ba0-84c6-3ca66329fc67", 00:16:11.554 "is_configured": true, 00:16:11.554 "data_offset": 256, 00:16:11.554 "data_size": 7936 00:16:11.554 } 00:16:11.554 ] 00:16:11.554 }' 00:16:11.554 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.554 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:11.554 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.814 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:11.814 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:11.814 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.814 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.814 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.814 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.814 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:11.814 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.814 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.814 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.814 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.814 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.814 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.814 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.814 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.814 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.814 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.814 "name": "raid_bdev1", 00:16:11.814 "uuid": "c3ae73f2-bd03-4a83-b229-b739117f1716", 00:16:11.814 
"strip_size_kb": 0, 00:16:11.814 "state": "online", 00:16:11.814 "raid_level": "raid1", 00:16:11.814 "superblock": true, 00:16:11.814 "num_base_bdevs": 2, 00:16:11.814 "num_base_bdevs_discovered": 2, 00:16:11.814 "num_base_bdevs_operational": 2, 00:16:11.814 "base_bdevs_list": [ 00:16:11.814 { 00:16:11.814 "name": "spare", 00:16:11.814 "uuid": "dab971ad-737d-5dfd-a75f-bfe9a313c83a", 00:16:11.814 "is_configured": true, 00:16:11.814 "data_offset": 256, 00:16:11.814 "data_size": 7936 00:16:11.814 }, 00:16:11.814 { 00:16:11.814 "name": "BaseBdev2", 00:16:11.814 "uuid": "2fbf78ad-aa64-5ba0-84c6-3ca66329fc67", 00:16:11.814 "is_configured": true, 00:16:11.814 "data_offset": 256, 00:16:11.814 "data_size": 7936 00:16:11.814 } 00:16:11.814 ] 00:16:11.814 }' 00:16:11.814 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.814 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:12.074 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:12.074 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.074 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:12.074 [2024-11-26 12:59:29.722567] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:12.074 [2024-11-26 12:59:29.722637] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:12.074 [2024-11-26 12:59:29.722729] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:12.074 [2024-11-26 12:59:29.722809] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:12.074 [2024-11-26 12:59:29.722827] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, 
state offline 00:16:12.074 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.074 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.074 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:16:12.074 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.074 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:12.074 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.333 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:12.333 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:12.333 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:12.333 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:12.333 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:12.333 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:12.333 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:12.333 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:12.333 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:12.333 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:12.333 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:12.333 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:12.333 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:12.333 /dev/nbd0 00:16:12.333 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:12.333 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:12.333 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:12.333 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:16:12.333 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:12.333 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:12.333 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:12.593 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:16:12.593 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:12.593 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:12.593 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:12.593 1+0 records in 00:16:12.593 1+0 records out 00:16:12.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471487 s, 8.7 MB/s 00:16:12.593 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:12.593 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:16:12.593 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:12.593 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:12.593 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:16:12.593 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:12.593 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:12.593 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:12.593 /dev/nbd1 00:16:12.593 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:12.593 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:12.593 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:12.593 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:16:12.593 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:12.593 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:12.593 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:12.593 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:16:12.853 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:12.854 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:12.854 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:12.854 1+0 records in 00:16:12.854 1+0 records out 00:16:12.854 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040837 s, 10.0 MB/s 00:16:12.854 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:12.854 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:16:12.854 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:12.854 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:12.854 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:16:12.854 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:12.854 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:12.854 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:12.854 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:12.854 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:12.854 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:12.854 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:16:12.854 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:12.854 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:12.854 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:13.114 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:13.114 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:13.114 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:13.114 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:13.114 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:13.114 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:13.114 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:13.114 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:13.114 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:13.114 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:13.114 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:13.114 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:13.114 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:16:13.114 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:13.114 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:13.114 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:13.114 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:13.114 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:13.114 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:13.114 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:13.114 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.114 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.374 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.374 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:13.374 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.374 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.374 [2024-11-26 12:59:30.797991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:13.374 [2024-11-26 12:59:30.798044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.374 [2024-11-26 12:59:30.798079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:13.374 [2024-11-26 12:59:30.798090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:16:13.374 [2024-11-26 12:59:30.799936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.374 [2024-11-26 12:59:30.799975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:13.374 [2024-11-26 12:59:30.800022] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:13.374 [2024-11-26 12:59:30.800065] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:13.374 [2024-11-26 12:59:30.800172] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:13.374 spare 00:16:13.374 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.374 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:13.374 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.374 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.374 [2024-11-26 12:59:30.900076] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:16:13.374 [2024-11-26 12:59:30.900142] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:13.374 [2024-11-26 12:59:30.900256] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:16:13.374 [2024-11-26 12:59:30.900429] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:16:13.374 [2024-11-26 12:59:30.900475] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:16:13.374 [2024-11-26 12:59:30.900599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.374 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:13.374 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:13.374 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.374 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.374 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:13.374 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:13.374 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:13.374 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.374 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.374 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.374 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.374 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.374 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.374 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.374 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.374 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.374 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.374 "name": "raid_bdev1", 00:16:13.374 "uuid": 
"c3ae73f2-bd03-4a83-b229-b739117f1716", 00:16:13.374 "strip_size_kb": 0, 00:16:13.374 "state": "online", 00:16:13.374 "raid_level": "raid1", 00:16:13.374 "superblock": true, 00:16:13.374 "num_base_bdevs": 2, 00:16:13.374 "num_base_bdevs_discovered": 2, 00:16:13.374 "num_base_bdevs_operational": 2, 00:16:13.374 "base_bdevs_list": [ 00:16:13.374 { 00:16:13.374 "name": "spare", 00:16:13.374 "uuid": "dab971ad-737d-5dfd-a75f-bfe9a313c83a", 00:16:13.374 "is_configured": true, 00:16:13.374 "data_offset": 256, 00:16:13.374 "data_size": 7936 00:16:13.374 }, 00:16:13.374 { 00:16:13.374 "name": "BaseBdev2", 00:16:13.374 "uuid": "2fbf78ad-aa64-5ba0-84c6-3ca66329fc67", 00:16:13.374 "is_configured": true, 00:16:13.374 "data_offset": 256, 00:16:13.374 "data_size": 7936 00:16:13.374 } 00:16:13.374 ] 00:16:13.374 }' 00:16:13.375 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.375 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.944 "name": "raid_bdev1", 00:16:13.944 "uuid": "c3ae73f2-bd03-4a83-b229-b739117f1716", 00:16:13.944 "strip_size_kb": 0, 00:16:13.944 "state": "online", 00:16:13.944 "raid_level": "raid1", 00:16:13.944 "superblock": true, 00:16:13.944 "num_base_bdevs": 2, 00:16:13.944 "num_base_bdevs_discovered": 2, 00:16:13.944 "num_base_bdevs_operational": 2, 00:16:13.944 "base_bdevs_list": [ 00:16:13.944 { 00:16:13.944 "name": "spare", 00:16:13.944 "uuid": "dab971ad-737d-5dfd-a75f-bfe9a313c83a", 00:16:13.944 "is_configured": true, 00:16:13.944 "data_offset": 256, 00:16:13.944 "data_size": 7936 00:16:13.944 }, 00:16:13.944 { 00:16:13.944 "name": "BaseBdev2", 00:16:13.944 "uuid": "2fbf78ad-aa64-5ba0-84c6-3ca66329fc67", 00:16:13.944 "is_configured": true, 00:16:13.944 "data_offset": 256, 00:16:13.944 "data_size": 7936 00:16:13.944 } 00:16:13.944 ] 00:16:13.944 }' 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:13.944 
12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.944 [2024-11-26 12:59:31.584649] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- 
# local num_base_bdevs 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.944 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.203 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.203 "name": "raid_bdev1", 00:16:14.203 "uuid": "c3ae73f2-bd03-4a83-b229-b739117f1716", 00:16:14.203 "strip_size_kb": 0, 00:16:14.203 "state": "online", 00:16:14.203 "raid_level": "raid1", 00:16:14.203 "superblock": true, 00:16:14.203 "num_base_bdevs": 2, 00:16:14.203 "num_base_bdevs_discovered": 1, 00:16:14.203 "num_base_bdevs_operational": 1, 00:16:14.203 "base_bdevs_list": [ 00:16:14.203 { 00:16:14.203 "name": null, 00:16:14.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.203 "is_configured": false, 00:16:14.203 "data_offset": 0, 00:16:14.203 "data_size": 7936 00:16:14.203 }, 00:16:14.203 { 00:16:14.203 "name": "BaseBdev2", 00:16:14.203 "uuid": "2fbf78ad-aa64-5ba0-84c6-3ca66329fc67", 00:16:14.203 "is_configured": true, 00:16:14.203 "data_offset": 256, 00:16:14.203 "data_size": 7936 00:16:14.203 } 00:16:14.203 ] 00:16:14.203 }' 00:16:14.203 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.203 12:59:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:14.463 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:14.463 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.463 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:14.463 [2024-11-26 12:59:32.027928] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:14.463 [2024-11-26 12:59:32.028090] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:14.463 [2024-11-26 12:59:32.028172] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:14.463 [2024-11-26 12:59:32.028259] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:14.463 [2024-11-26 12:59:32.029869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:16:14.463 [2024-11-26 12:59:32.031660] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:14.463 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.463 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:15.439 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.439 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.439 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.439 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:16:15.439 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.439 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.439 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.439 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.439 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:15.439 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.439 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.439 "name": "raid_bdev1", 00:16:15.439 "uuid": "c3ae73f2-bd03-4a83-b229-b739117f1716", 00:16:15.439 "strip_size_kb": 0, 00:16:15.439 "state": "online", 00:16:15.439 "raid_level": "raid1", 00:16:15.439 "superblock": true, 00:16:15.439 "num_base_bdevs": 2, 00:16:15.439 "num_base_bdevs_discovered": 2, 00:16:15.439 "num_base_bdevs_operational": 2, 00:16:15.439 "process": { 00:16:15.439 "type": "rebuild", 00:16:15.439 "target": "spare", 00:16:15.439 "progress": { 00:16:15.439 "blocks": 2560, 00:16:15.439 "percent": 32 00:16:15.439 } 00:16:15.439 }, 00:16:15.439 "base_bdevs_list": [ 00:16:15.439 { 00:16:15.439 "name": "spare", 00:16:15.439 "uuid": "dab971ad-737d-5dfd-a75f-bfe9a313c83a", 00:16:15.439 "is_configured": true, 00:16:15.439 "data_offset": 256, 00:16:15.439 "data_size": 7936 00:16:15.439 }, 00:16:15.439 { 00:16:15.439 "name": "BaseBdev2", 00:16:15.439 "uuid": "2fbf78ad-aa64-5ba0-84c6-3ca66329fc67", 00:16:15.439 "is_configured": true, 00:16:15.439 "data_offset": 256, 00:16:15.439 "data_size": 7936 00:16:15.439 } 00:16:15.439 ] 00:16:15.439 }' 00:16:15.439 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.699 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.699 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.699 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.699 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:15.699 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.699 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:15.699 [2024-11-26 12:59:33.203409] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:15.699 [2024-11-26 12:59:33.235782] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:15.699 [2024-11-26 12:59:33.235900] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.699 [2024-11-26 12:59:33.235919] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:15.699 [2024-11-26 12:59:33.235926] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:15.699 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.699 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:15.699 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.699 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.699 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:15.699 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:15.699 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:15.699 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.699 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.699 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.699 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.699 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.699 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.699 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.699 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:15.699 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.699 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.699 "name": "raid_bdev1", 00:16:15.699 "uuid": "c3ae73f2-bd03-4a83-b229-b739117f1716", 00:16:15.699 "strip_size_kb": 0, 00:16:15.699 "state": "online", 00:16:15.699 "raid_level": "raid1", 00:16:15.699 "superblock": true, 00:16:15.699 "num_base_bdevs": 2, 00:16:15.699 "num_base_bdevs_discovered": 1, 00:16:15.699 "num_base_bdevs_operational": 1, 00:16:15.699 "base_bdevs_list": [ 00:16:15.699 { 00:16:15.699 "name": null, 00:16:15.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.699 
"is_configured": false, 00:16:15.699 "data_offset": 0, 00:16:15.699 "data_size": 7936 00:16:15.699 }, 00:16:15.699 { 00:16:15.699 "name": "BaseBdev2", 00:16:15.699 "uuid": "2fbf78ad-aa64-5ba0-84c6-3ca66329fc67", 00:16:15.699 "is_configured": true, 00:16:15.699 "data_offset": 256, 00:16:15.699 "data_size": 7936 00:16:15.699 } 00:16:15.699 ] 00:16:15.699 }' 00:16:15.699 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.699 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.269 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:16.269 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.269 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.269 [2024-11-26 12:59:33.686806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:16.269 [2024-11-26 12:59:33.686859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.269 [2024-11-26 12:59:33.686882] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:16.269 [2024-11-26 12:59:33.686891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.269 [2024-11-26 12:59:33.687079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.269 [2024-11-26 12:59:33.687092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:16.269 [2024-11-26 12:59:33.687142] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:16.269 [2024-11-26 12:59:33.687151] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:16:16.269 [2024-11-26 12:59:33.687164] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:16.269 [2024-11-26 12:59:33.687199] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:16.269 [2024-11-26 12:59:33.688441] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:16:16.269 [2024-11-26 12:59:33.690257] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:16.269 spare 00:16:16.269 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.269 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:17.208 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.209 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.209 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.209 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.209 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.209 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.209 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.209 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.209 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.209 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:17.209 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.209 "name": "raid_bdev1", 00:16:17.209 "uuid": "c3ae73f2-bd03-4a83-b229-b739117f1716", 00:16:17.209 "strip_size_kb": 0, 00:16:17.209 "state": "online", 00:16:17.209 "raid_level": "raid1", 00:16:17.209 "superblock": true, 00:16:17.209 "num_base_bdevs": 2, 00:16:17.209 "num_base_bdevs_discovered": 2, 00:16:17.209 "num_base_bdevs_operational": 2, 00:16:17.209 "process": { 00:16:17.209 "type": "rebuild", 00:16:17.209 "target": "spare", 00:16:17.209 "progress": { 00:16:17.209 "blocks": 2560, 00:16:17.209 "percent": 32 00:16:17.209 } 00:16:17.209 }, 00:16:17.209 "base_bdevs_list": [ 00:16:17.209 { 00:16:17.209 "name": "spare", 00:16:17.209 "uuid": "dab971ad-737d-5dfd-a75f-bfe9a313c83a", 00:16:17.209 "is_configured": true, 00:16:17.209 "data_offset": 256, 00:16:17.209 "data_size": 7936 00:16:17.209 }, 00:16:17.209 { 00:16:17.209 "name": "BaseBdev2", 00:16:17.209 "uuid": "2fbf78ad-aa64-5ba0-84c6-3ca66329fc67", 00:16:17.209 "is_configured": true, 00:16:17.209 "data_offset": 256, 00:16:17.209 "data_size": 7936 00:16:17.209 } 00:16:17.209 ] 00:16:17.209 }' 00:16:17.209 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.209 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:17.209 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.209 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.209 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:17.209 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.209 12:59:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.209 [2024-11-26 12:59:34.849382] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:17.469 [2024-11-26 12:59:34.894145] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:17.469 [2024-11-26 12:59:34.894215] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.469 [2024-11-26 12:59:34.894230] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:17.469 [2024-11-26 12:59:34.894238] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:17.469 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.469 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:17.469 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.469 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.469 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:17.469 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:17.469 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:17.469 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.469 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.469 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.469 12:59:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.469 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.469 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.469 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.469 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.469 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.469 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.469 "name": "raid_bdev1", 00:16:17.469 "uuid": "c3ae73f2-bd03-4a83-b229-b739117f1716", 00:16:17.469 "strip_size_kb": 0, 00:16:17.469 "state": "online", 00:16:17.469 "raid_level": "raid1", 00:16:17.469 "superblock": true, 00:16:17.469 "num_base_bdevs": 2, 00:16:17.469 "num_base_bdevs_discovered": 1, 00:16:17.469 "num_base_bdevs_operational": 1, 00:16:17.469 "base_bdevs_list": [ 00:16:17.469 { 00:16:17.469 "name": null, 00:16:17.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.469 "is_configured": false, 00:16:17.469 "data_offset": 0, 00:16:17.469 "data_size": 7936 00:16:17.469 }, 00:16:17.469 { 00:16:17.469 "name": "BaseBdev2", 00:16:17.469 "uuid": "2fbf78ad-aa64-5ba0-84c6-3ca66329fc67", 00:16:17.469 "is_configured": true, 00:16:17.469 "data_offset": 256, 00:16:17.469 "data_size": 7936 00:16:17.469 } 00:16:17.469 ] 00:16:17.469 }' 00:16:17.469 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.469 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.729 12:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:16:17.729 12:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.729 12:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:17.729 12:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:17.729 12:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.729 12:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.729 12:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.729 12:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.729 12:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.729 12:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.989 12:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.989 "name": "raid_bdev1", 00:16:17.989 "uuid": "c3ae73f2-bd03-4a83-b229-b739117f1716", 00:16:17.989 "strip_size_kb": 0, 00:16:17.989 "state": "online", 00:16:17.989 "raid_level": "raid1", 00:16:17.989 "superblock": true, 00:16:17.989 "num_base_bdevs": 2, 00:16:17.989 "num_base_bdevs_discovered": 1, 00:16:17.989 "num_base_bdevs_operational": 1, 00:16:17.989 "base_bdevs_list": [ 00:16:17.989 { 00:16:17.989 "name": null, 00:16:17.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.989 "is_configured": false, 00:16:17.989 "data_offset": 0, 00:16:17.989 "data_size": 7936 00:16:17.989 }, 00:16:17.989 { 00:16:17.989 "name": "BaseBdev2", 00:16:17.989 "uuid": "2fbf78ad-aa64-5ba0-84c6-3ca66329fc67", 00:16:17.989 "is_configured": true, 
00:16:17.989 "data_offset": 256, 00:16:17.989 "data_size": 7936 00:16:17.989 } 00:16:17.989 ] 00:16:17.989 }' 00:16:17.989 12:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.989 12:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:17.989 12:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.989 12:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:17.989 12:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:17.989 12:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.989 12:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.989 12:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.989 12:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:17.989 12:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.989 12:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.989 [2024-11-26 12:59:35.528855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:17.989 [2024-11-26 12:59:35.528951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.989 [2024-11-26 12:59:35.528972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:17.989 [2024-11-26 12:59:35.528983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.989 [2024-11-26 12:59:35.529179] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.989 [2024-11-26 12:59:35.529196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:17.989 [2024-11-26 12:59:35.529248] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:17.989 [2024-11-26 12:59:35.529264] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:17.989 [2024-11-26 12:59:35.529272] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:17.989 [2024-11-26 12:59:35.529283] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:17.989 BaseBdev1 00:16:17.989 12:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.989 12:59:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:18.930 12:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:18.930 12:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.930 12:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.930 12:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.930 12:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.930 12:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:18.930 12:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.930 12:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.930 12:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.930 12:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.930 12:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.930 12:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.930 12:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.930 12:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.930 12:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.930 12:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.930 "name": "raid_bdev1", 00:16:18.930 "uuid": "c3ae73f2-bd03-4a83-b229-b739117f1716", 00:16:18.930 "strip_size_kb": 0, 00:16:18.930 "state": "online", 00:16:18.930 "raid_level": "raid1", 00:16:18.930 "superblock": true, 00:16:18.930 "num_base_bdevs": 2, 00:16:18.930 "num_base_bdevs_discovered": 1, 00:16:18.930 "num_base_bdevs_operational": 1, 00:16:18.930 "base_bdevs_list": [ 00:16:18.930 { 00:16:18.930 "name": null, 00:16:18.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.930 "is_configured": false, 00:16:18.930 "data_offset": 0, 00:16:18.930 "data_size": 7936 00:16:18.930 }, 00:16:18.930 { 00:16:18.930 "name": "BaseBdev2", 00:16:18.930 "uuid": "2fbf78ad-aa64-5ba0-84c6-3ca66329fc67", 00:16:18.930 "is_configured": true, 00:16:18.930 "data_offset": 256, 00:16:18.930 "data_size": 7936 00:16:18.930 } 00:16:18.930 ] 00:16:18.930 }' 00:16:18.930 12:59:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.930 12:59:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.500 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:19.500 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.500 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:19.500 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:19.500 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.500 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.500 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.500 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.500 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.500 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.500 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.500 "name": "raid_bdev1", 00:16:19.500 "uuid": "c3ae73f2-bd03-4a83-b229-b739117f1716", 00:16:19.500 "strip_size_kb": 0, 00:16:19.500 "state": "online", 00:16:19.500 "raid_level": "raid1", 00:16:19.500 "superblock": true, 00:16:19.500 "num_base_bdevs": 2, 00:16:19.500 "num_base_bdevs_discovered": 1, 00:16:19.500 "num_base_bdevs_operational": 1, 00:16:19.500 "base_bdevs_list": [ 00:16:19.500 { 00:16:19.500 "name": null, 00:16:19.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.500 "is_configured": false, 00:16:19.500 "data_offset": 0, 00:16:19.500 
"data_size": 7936 00:16:19.500 }, 00:16:19.500 { 00:16:19.500 "name": "BaseBdev2", 00:16:19.500 "uuid": "2fbf78ad-aa64-5ba0-84c6-3ca66329fc67", 00:16:19.500 "is_configured": true, 00:16:19.500 "data_offset": 256, 00:16:19.500 "data_size": 7936 00:16:19.500 } 00:16:19.500 ] 00:16:19.500 }' 00:16:19.500 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.500 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:19.500 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.500 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:19.500 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:19.500 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:16:19.500 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:19.500 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:19.500 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:19.500 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:19.500 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:19.500 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:19.500 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:19.500 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.500 [2024-11-26 12:59:37.170264] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:19.500 [2024-11-26 12:59:37.170430] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:19.500 [2024-11-26 12:59:37.170481] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:19.500 request: 00:16:19.500 { 00:16:19.500 "base_bdev": "BaseBdev1", 00:16:19.500 "raid_bdev": "raid_bdev1", 00:16:19.500 "method": "bdev_raid_add_base_bdev", 00:16:19.500 "req_id": 1 00:16:19.500 } 00:16:19.500 Got JSON-RPC error response 00:16:19.500 response: 00:16:19.500 { 00:16:19.500 "code": -22, 00:16:19.500 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:19.760 } 00:16:19.760 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:19.760 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:16:19.760 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:19.760 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:19.760 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:19.760 12:59:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:20.699 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:20.699 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.699 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.699 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:20.699 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:20.699 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:20.699 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.699 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.699 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.699 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.699 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.699 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.699 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.699 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.699 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.699 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.699 "name": "raid_bdev1", 00:16:20.699 "uuid": "c3ae73f2-bd03-4a83-b229-b739117f1716", 00:16:20.699 "strip_size_kb": 0, 00:16:20.699 "state": "online", 00:16:20.699 "raid_level": "raid1", 00:16:20.699 "superblock": true, 00:16:20.699 "num_base_bdevs": 2, 00:16:20.699 "num_base_bdevs_discovered": 1, 00:16:20.699 "num_base_bdevs_operational": 1, 00:16:20.699 "base_bdevs_list": [ 
00:16:20.699 { 00:16:20.699 "name": null, 00:16:20.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.699 "is_configured": false, 00:16:20.699 "data_offset": 0, 00:16:20.699 "data_size": 7936 00:16:20.699 }, 00:16:20.699 { 00:16:20.699 "name": "BaseBdev2", 00:16:20.699 "uuid": "2fbf78ad-aa64-5ba0-84c6-3ca66329fc67", 00:16:20.699 "is_configured": true, 00:16:20.699 "data_offset": 256, 00:16:20.699 "data_size": 7936 00:16:20.699 } 00:16:20.699 ] 00:16:20.699 }' 00:16:20.699 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.699 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.269 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:21.269 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.269 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:21.269 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:21.269 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.269 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.269 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.269 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.269 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.269 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.269 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.269 "name": "raid_bdev1", 00:16:21.269 "uuid": "c3ae73f2-bd03-4a83-b229-b739117f1716", 00:16:21.269 "strip_size_kb": 0, 00:16:21.269 "state": "online", 00:16:21.269 "raid_level": "raid1", 00:16:21.269 "superblock": true, 00:16:21.269 "num_base_bdevs": 2, 00:16:21.269 "num_base_bdevs_discovered": 1, 00:16:21.269 "num_base_bdevs_operational": 1, 00:16:21.269 "base_bdevs_list": [ 00:16:21.269 { 00:16:21.269 "name": null, 00:16:21.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.269 "is_configured": false, 00:16:21.269 "data_offset": 0, 00:16:21.269 "data_size": 7936 00:16:21.269 }, 00:16:21.269 { 00:16:21.269 "name": "BaseBdev2", 00:16:21.269 "uuid": "2fbf78ad-aa64-5ba0-84c6-3ca66329fc67", 00:16:21.269 "is_configured": true, 00:16:21.269 "data_offset": 256, 00:16:21.269 "data_size": 7936 00:16:21.269 } 00:16:21.269 ] 00:16:21.269 }' 00:16:21.270 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.270 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:21.270 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.270 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:21.270 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 98322 00:16:21.270 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 98322 ']' 00:16:21.270 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 98322 00:16:21.270 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:16:21.270 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:21.270 
12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98322 00:16:21.270 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:21.270 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:21.270 killing process with pid 98322 00:16:21.270 Received shutdown signal, test time was about 60.000000 seconds 00:16:21.270 00:16:21.270 Latency(us) 00:16:21.270 [2024-11-26T12:59:38.954Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.270 [2024-11-26T12:59:38.954Z] =================================================================================================================== 00:16:21.270 [2024-11-26T12:59:38.954Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:21.270 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98322' 00:16:21.270 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 98322 00:16:21.270 [2024-11-26 12:59:38.835821] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:21.270 [2024-11-26 12:59:38.835939] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.270 [2024-11-26 12:59:38.835982] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:21.270 [2024-11-26 12:59:38.835991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:16:21.270 12:59:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 98322 00:16:21.270 [2024-11-26 12:59:38.869054] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:21.531 12:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:16:21.531 00:16:21.531 real 0m18.572s 00:16:21.531 user 0m24.813s 00:16:21.531 sys 0m2.699s 00:16:21.531 12:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:21.531 12:59:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.531 ************************************ 00:16:21.531 END TEST raid_rebuild_test_sb_md_separate 00:16:21.531 ************************************ 00:16:21.531 12:59:39 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:16:21.531 12:59:39 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:16:21.531 12:59:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:21.531 12:59:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:21.531 12:59:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:21.531 ************************************ 00:16:21.531 START TEST raid_state_function_test_sb_md_interleaved 00:16:21.531 ************************************ 00:16:21.531 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:16:21.531 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:21.531 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:21.531 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:21.531 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:21.531 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:21.531 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:21.531 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:21.531 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:21.531 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:21.531 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:21.532 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:21.532 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:21.532 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:21.532 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:21.532 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:21.532 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:21.532 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:21.532 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:21.532 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:21.532 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:21.532 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:21.532 12:59:39 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:21.532 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=99002 00:16:21.532 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:21.532 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 99002' 00:16:21.532 Process raid pid: 99002 00:16:21.532 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 99002 00:16:21.532 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99002 ']' 00:16:21.532 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.532 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:21.532 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.532 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:21.532 12:59:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:21.791 [2024-11-26 12:59:39.290192] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:21.791 [2024-11-26 12:59:39.290383] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.791 [2024-11-26 12:59:39.455636] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.051 [2024-11-26 12:59:39.504705] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.051 [2024-11-26 12:59:39.546649] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:22.051 [2024-11-26 12:59:39.546732] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:22.620 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:22.620 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:22.620 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:22.620 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.620 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.620 [2024-11-26 12:59:40.107991] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:22.620 [2024-11-26 12:59:40.108099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:22.620 [2024-11-26 12:59:40.108116] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:22.620 [2024-11-26 12:59:40.108125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:22.620 12:59:40 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.620 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:22.620 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.620 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.620 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:22.620 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:22.620 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:22.620 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.621 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.621 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.621 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.621 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.621 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.621 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.621 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.621 12:59:40 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.621 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.621 "name": "Existed_Raid", 00:16:22.621 "uuid": "75df2c43-998a-434e-b27f-d1bf19261113", 00:16:22.621 "strip_size_kb": 0, 00:16:22.621 "state": "configuring", 00:16:22.621 "raid_level": "raid1", 00:16:22.621 "superblock": true, 00:16:22.621 "num_base_bdevs": 2, 00:16:22.621 "num_base_bdevs_discovered": 0, 00:16:22.621 "num_base_bdevs_operational": 2, 00:16:22.621 "base_bdevs_list": [ 00:16:22.621 { 00:16:22.621 "name": "BaseBdev1", 00:16:22.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.621 "is_configured": false, 00:16:22.621 "data_offset": 0, 00:16:22.621 "data_size": 0 00:16:22.621 }, 00:16:22.621 { 00:16:22.621 "name": "BaseBdev2", 00:16:22.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.621 "is_configured": false, 00:16:22.621 "data_offset": 0, 00:16:22.621 "data_size": 0 00:16:22.621 } 00:16:22.621 ] 00:16:22.621 }' 00:16:22.621 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.621 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.881 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:22.881 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.881 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.881 [2024-11-26 12:59:40.543235] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:22.881 [2024-11-26 12:59:40.543326] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state 
configuring 00:16:22.881 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.881 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:22.881 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.881 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.881 [2024-11-26 12:59:40.555258] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:22.881 [2024-11-26 12:59:40.555333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:22.881 [2024-11-26 12:59:40.555359] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:22.881 [2024-11-26 12:59:40.555381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:23.141 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.141 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:16:23.141 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.142 [2024-11-26 12:59:40.576358] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:23.142 BaseBdev1 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.142 [ 00:16:23.142 { 00:16:23.142 "name": "BaseBdev1", 00:16:23.142 "aliases": [ 00:16:23.142 "dbd127af-a43b-4166-a983-cfdea4a2517c" 00:16:23.142 ], 00:16:23.142 "product_name": "Malloc disk", 00:16:23.142 "block_size": 4128, 00:16:23.142 "num_blocks": 8192, 00:16:23.142 "uuid": "dbd127af-a43b-4166-a983-cfdea4a2517c", 00:16:23.142 "md_size": 32, 00:16:23.142 
"md_interleave": true, 00:16:23.142 "dif_type": 0, 00:16:23.142 "assigned_rate_limits": { 00:16:23.142 "rw_ios_per_sec": 0, 00:16:23.142 "rw_mbytes_per_sec": 0, 00:16:23.142 "r_mbytes_per_sec": 0, 00:16:23.142 "w_mbytes_per_sec": 0 00:16:23.142 }, 00:16:23.142 "claimed": true, 00:16:23.142 "claim_type": "exclusive_write", 00:16:23.142 "zoned": false, 00:16:23.142 "supported_io_types": { 00:16:23.142 "read": true, 00:16:23.142 "write": true, 00:16:23.142 "unmap": true, 00:16:23.142 "flush": true, 00:16:23.142 "reset": true, 00:16:23.142 "nvme_admin": false, 00:16:23.142 "nvme_io": false, 00:16:23.142 "nvme_io_md": false, 00:16:23.142 "write_zeroes": true, 00:16:23.142 "zcopy": true, 00:16:23.142 "get_zone_info": false, 00:16:23.142 "zone_management": false, 00:16:23.142 "zone_append": false, 00:16:23.142 "compare": false, 00:16:23.142 "compare_and_write": false, 00:16:23.142 "abort": true, 00:16:23.142 "seek_hole": false, 00:16:23.142 "seek_data": false, 00:16:23.142 "copy": true, 00:16:23.142 "nvme_iov_md": false 00:16:23.142 }, 00:16:23.142 "memory_domains": [ 00:16:23.142 { 00:16:23.142 "dma_device_id": "system", 00:16:23.142 "dma_device_type": 1 00:16:23.142 }, 00:16:23.142 { 00:16:23.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.142 "dma_device_type": 2 00:16:23.142 } 00:16:23.142 ], 00:16:23.142 "driver_specific": {} 00:16:23.142 } 00:16:23.142 ] 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.142 12:59:40 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.142 "name": "Existed_Raid", 00:16:23.142 "uuid": "5301a653-b204-48f9-9a6f-cf09bcc73446", 00:16:23.142 "strip_size_kb": 0, 00:16:23.142 "state": "configuring", 00:16:23.142 "raid_level": "raid1", 
00:16:23.142 "superblock": true, 00:16:23.142 "num_base_bdevs": 2, 00:16:23.142 "num_base_bdevs_discovered": 1, 00:16:23.142 "num_base_bdevs_operational": 2, 00:16:23.142 "base_bdevs_list": [ 00:16:23.142 { 00:16:23.142 "name": "BaseBdev1", 00:16:23.142 "uuid": "dbd127af-a43b-4166-a983-cfdea4a2517c", 00:16:23.142 "is_configured": true, 00:16:23.142 "data_offset": 256, 00:16:23.142 "data_size": 7936 00:16:23.142 }, 00:16:23.142 { 00:16:23.142 "name": "BaseBdev2", 00:16:23.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.142 "is_configured": false, 00:16:23.142 "data_offset": 0, 00:16:23.142 "data_size": 0 00:16:23.142 } 00:16:23.142 ] 00:16:23.142 }' 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.142 12:59:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.711 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:23.711 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.711 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.711 [2024-11-26 12:59:41.087535] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:23.711 [2024-11-26 12:59:41.087632] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:16:23.711 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.711 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:23.712 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:23.712 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.712 [2024-11-26 12:59:41.099585] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:23.712 [2024-11-26 12:59:41.101270] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:23.712 [2024-11-26 12:59:41.101302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:23.712 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.712 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:23.712 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:23.712 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:23.712 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.712 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.712 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.712 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.712 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:23.712 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.712 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.712 
12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.712 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.712 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.712 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.712 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.712 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.712 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.712 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.712 "name": "Existed_Raid", 00:16:23.712 "uuid": "9bac2baf-6db4-49c9-a3c1-c72cb5f0d0fb", 00:16:23.712 "strip_size_kb": 0, 00:16:23.712 "state": "configuring", 00:16:23.712 "raid_level": "raid1", 00:16:23.712 "superblock": true, 00:16:23.712 "num_base_bdevs": 2, 00:16:23.712 "num_base_bdevs_discovered": 1, 00:16:23.712 "num_base_bdevs_operational": 2, 00:16:23.712 "base_bdevs_list": [ 00:16:23.712 { 00:16:23.712 "name": "BaseBdev1", 00:16:23.712 "uuid": "dbd127af-a43b-4166-a983-cfdea4a2517c", 00:16:23.712 "is_configured": true, 00:16:23.712 "data_offset": 256, 00:16:23.712 "data_size": 7936 00:16:23.712 }, 00:16:23.712 { 00:16:23.712 "name": "BaseBdev2", 00:16:23.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.712 "is_configured": false, 00:16:23.712 "data_offset": 0, 00:16:23.712 "data_size": 0 00:16:23.712 } 00:16:23.712 ] 00:16:23.712 }' 00:16:23.712 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:16:23.712 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.972 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:16:23.972 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.972 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.972 [2024-11-26 12:59:41.616970] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:23.972 [2024-11-26 12:59:41.617698] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:23.972 [2024-11-26 12:59:41.617875] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:23.972 BaseBdev2 00:16:23.972 [2024-11-26 12:59:41.618351] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:23.972 [2024-11-26 12:59:41.618682] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:23.972 [2024-11-26 12:59:41.618825] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:16:23.972 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.972 [2024-11-26 12:59:41.619409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.972 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:23.972 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:23.972 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:16:23.972 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:16:23.972 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:23.972 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:23.972 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:23.972 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.972 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.972 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.972 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:23.972 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.972 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.972 [ 00:16:23.972 { 00:16:23.972 "name": "BaseBdev2", 00:16:23.972 "aliases": [ 00:16:23.972 "6a0a53ce-c3d9-48de-b22e-3078a95a3cf6" 00:16:23.972 ], 00:16:23.972 "product_name": "Malloc disk", 00:16:23.972 "block_size": 4128, 00:16:23.972 "num_blocks": 8192, 00:16:23.972 "uuid": "6a0a53ce-c3d9-48de-b22e-3078a95a3cf6", 00:16:23.972 "md_size": 32, 00:16:23.972 "md_interleave": true, 00:16:24.232 "dif_type": 0, 00:16:24.232 "assigned_rate_limits": { 00:16:24.232 "rw_ios_per_sec": 0, 00:16:24.232 "rw_mbytes_per_sec": 0, 00:16:24.232 "r_mbytes_per_sec": 0, 00:16:24.232 "w_mbytes_per_sec": 0 00:16:24.232 }, 00:16:24.232 "claimed": true, 00:16:24.232 "claim_type": "exclusive_write", 
00:16:24.232 "zoned": false, 00:16:24.232 "supported_io_types": { 00:16:24.232 "read": true, 00:16:24.232 "write": true, 00:16:24.232 "unmap": true, 00:16:24.232 "flush": true, 00:16:24.232 "reset": true, 00:16:24.232 "nvme_admin": false, 00:16:24.232 "nvme_io": false, 00:16:24.232 "nvme_io_md": false, 00:16:24.232 "write_zeroes": true, 00:16:24.232 "zcopy": true, 00:16:24.232 "get_zone_info": false, 00:16:24.232 "zone_management": false, 00:16:24.232 "zone_append": false, 00:16:24.232 "compare": false, 00:16:24.232 "compare_and_write": false, 00:16:24.232 "abort": true, 00:16:24.232 "seek_hole": false, 00:16:24.232 "seek_data": false, 00:16:24.232 "copy": true, 00:16:24.232 "nvme_iov_md": false 00:16:24.232 }, 00:16:24.232 "memory_domains": [ 00:16:24.232 { 00:16:24.232 "dma_device_id": "system", 00:16:24.232 "dma_device_type": 1 00:16:24.232 }, 00:16:24.232 { 00:16:24.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.232 "dma_device_type": 2 00:16:24.232 } 00:16:24.232 ], 00:16:24.232 "driver_specific": {} 00:16:24.232 } 00:16:24.232 ] 00:16:24.232 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.232 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:16:24.232 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:24.232 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:24.232 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:24.232 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.232 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.232 
12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:24.232 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:24.232 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:24.232 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.232 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.232 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.232 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.232 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.232 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.232 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.232 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.232 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.232 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.232 "name": "Existed_Raid", 00:16:24.232 "uuid": "9bac2baf-6db4-49c9-a3c1-c72cb5f0d0fb", 00:16:24.232 "strip_size_kb": 0, 00:16:24.232 "state": "online", 00:16:24.232 "raid_level": "raid1", 00:16:24.232 "superblock": true, 00:16:24.232 "num_base_bdevs": 2, 00:16:24.232 "num_base_bdevs_discovered": 2, 00:16:24.233 
"num_base_bdevs_operational": 2, 00:16:24.233 "base_bdevs_list": [ 00:16:24.233 { 00:16:24.233 "name": "BaseBdev1", 00:16:24.233 "uuid": "dbd127af-a43b-4166-a983-cfdea4a2517c", 00:16:24.233 "is_configured": true, 00:16:24.233 "data_offset": 256, 00:16:24.233 "data_size": 7936 00:16:24.233 }, 00:16:24.233 { 00:16:24.233 "name": "BaseBdev2", 00:16:24.233 "uuid": "6a0a53ce-c3d9-48de-b22e-3078a95a3cf6", 00:16:24.233 "is_configured": true, 00:16:24.233 "data_offset": 256, 00:16:24.233 "data_size": 7936 00:16:24.233 } 00:16:24.233 ] 00:16:24.233 }' 00:16:24.233 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.233 12:59:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.493 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:24.493 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:24.493 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:24.493 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:24.493 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:24.493 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:24.493 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:24.493 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:24.493 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.493 12:59:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.493 [2024-11-26 12:59:42.096422] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:24.493 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.493 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:24.493 "name": "Existed_Raid", 00:16:24.493 "aliases": [ 00:16:24.493 "9bac2baf-6db4-49c9-a3c1-c72cb5f0d0fb" 00:16:24.493 ], 00:16:24.493 "product_name": "Raid Volume", 00:16:24.493 "block_size": 4128, 00:16:24.493 "num_blocks": 7936, 00:16:24.493 "uuid": "9bac2baf-6db4-49c9-a3c1-c72cb5f0d0fb", 00:16:24.493 "md_size": 32, 00:16:24.493 "md_interleave": true, 00:16:24.493 "dif_type": 0, 00:16:24.493 "assigned_rate_limits": { 00:16:24.493 "rw_ios_per_sec": 0, 00:16:24.493 "rw_mbytes_per_sec": 0, 00:16:24.493 "r_mbytes_per_sec": 0, 00:16:24.493 "w_mbytes_per_sec": 0 00:16:24.493 }, 00:16:24.493 "claimed": false, 00:16:24.493 "zoned": false, 00:16:24.493 "supported_io_types": { 00:16:24.493 "read": true, 00:16:24.493 "write": true, 00:16:24.493 "unmap": false, 00:16:24.493 "flush": false, 00:16:24.493 "reset": true, 00:16:24.493 "nvme_admin": false, 00:16:24.493 "nvme_io": false, 00:16:24.493 "nvme_io_md": false, 00:16:24.493 "write_zeroes": true, 00:16:24.493 "zcopy": false, 00:16:24.493 "get_zone_info": false, 00:16:24.493 "zone_management": false, 00:16:24.493 "zone_append": false, 00:16:24.493 "compare": false, 00:16:24.493 "compare_and_write": false, 00:16:24.493 "abort": false, 00:16:24.493 "seek_hole": false, 00:16:24.493 "seek_data": false, 00:16:24.493 "copy": false, 00:16:24.493 "nvme_iov_md": false 00:16:24.493 }, 00:16:24.493 "memory_domains": [ 00:16:24.493 { 00:16:24.493 "dma_device_id": "system", 00:16:24.493 "dma_device_type": 1 00:16:24.493 }, 00:16:24.493 { 00:16:24.493 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:24.493 "dma_device_type": 2 00:16:24.493 }, 00:16:24.493 { 00:16:24.493 "dma_device_id": "system", 00:16:24.493 "dma_device_type": 1 00:16:24.493 }, 00:16:24.493 { 00:16:24.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.493 "dma_device_type": 2 00:16:24.493 } 00:16:24.493 ], 00:16:24.493 "driver_specific": { 00:16:24.493 "raid": { 00:16:24.493 "uuid": "9bac2baf-6db4-49c9-a3c1-c72cb5f0d0fb", 00:16:24.493 "strip_size_kb": 0, 00:16:24.493 "state": "online", 00:16:24.493 "raid_level": "raid1", 00:16:24.493 "superblock": true, 00:16:24.493 "num_base_bdevs": 2, 00:16:24.493 "num_base_bdevs_discovered": 2, 00:16:24.493 "num_base_bdevs_operational": 2, 00:16:24.493 "base_bdevs_list": [ 00:16:24.493 { 00:16:24.493 "name": "BaseBdev1", 00:16:24.493 "uuid": "dbd127af-a43b-4166-a983-cfdea4a2517c", 00:16:24.493 "is_configured": true, 00:16:24.493 "data_offset": 256, 00:16:24.493 "data_size": 7936 00:16:24.493 }, 00:16:24.493 { 00:16:24.493 "name": "BaseBdev2", 00:16:24.493 "uuid": "6a0a53ce-c3d9-48de-b22e-3078a95a3cf6", 00:16:24.493 "is_configured": true, 00:16:24.493 "data_offset": 256, 00:16:24.493 "data_size": 7936 00:16:24.493 } 00:16:24.493 ] 00:16:24.493 } 00:16:24.493 } 00:16:24.493 }' 00:16:24.493 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:24.754 BaseBdev2' 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:24.754 
12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.754 [2024-11-26 12:59:42.303912] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:24.754 12:59:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.754 "name": "Existed_Raid", 00:16:24.754 "uuid": "9bac2baf-6db4-49c9-a3c1-c72cb5f0d0fb", 00:16:24.754 "strip_size_kb": 0, 00:16:24.754 "state": "online", 00:16:24.754 "raid_level": "raid1", 00:16:24.754 "superblock": true, 00:16:24.754 "num_base_bdevs": 2, 00:16:24.754 "num_base_bdevs_discovered": 1, 00:16:24.754 "num_base_bdevs_operational": 1, 00:16:24.754 "base_bdevs_list": [ 00:16:24.754 { 00:16:24.754 "name": null, 00:16:24.754 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:24.754 "is_configured": false, 00:16:24.754 "data_offset": 0, 00:16:24.754 "data_size": 7936 00:16:24.754 }, 00:16:24.754 { 00:16:24.754 "name": "BaseBdev2", 00:16:24.754 "uuid": "6a0a53ce-c3d9-48de-b22e-3078a95a3cf6", 00:16:24.754 "is_configured": true, 00:16:24.754 "data_offset": 256, 00:16:24.754 "data_size": 7936 00:16:24.754 } 00:16:24.754 ] 00:16:24.754 }' 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.754 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:25.325 12:59:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.325 [2024-11-26 12:59:42.810865] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:25.325 [2024-11-26 12:59:42.810952] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:25.325 [2024-11-26 12:59:42.822667] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:25.325 [2024-11-26 12:59:42.822719] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:25.325 [2024-11-26 12:59:42.822731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 99002 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99002 ']' 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99002 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99002 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:25.325 killing process with pid 99002 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99002' 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 99002 00:16:25.325 [2024-11-26 12:59:42.923353] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:25.325 12:59:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 99002 00:16:25.325 [2024-11-26 12:59:42.924305] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:25.587 
12:59:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:16:25.587 00:16:25.587 real 0m4.002s 00:16:25.587 user 0m6.268s 00:16:25.587 sys 0m0.847s 00:16:25.587 ************************************ 00:16:25.587 END TEST raid_state_function_test_sb_md_interleaved 00:16:25.587 ************************************ 00:16:25.587 12:59:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:25.587 12:59:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.587 12:59:43 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:16:25.587 12:59:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:25.587 12:59:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:25.587 12:59:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:25.848 ************************************ 00:16:25.848 START TEST raid_superblock_test_md_interleaved 00:16:25.848 ************************************ 00:16:25.848 12:59:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:16:25.848 12:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:25.848 12:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:25.848 12:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:25.848 12:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:25.848 12:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:25.848 12:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:16:25.848 12:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:25.848 12:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:25.848 12:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:25.848 12:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:25.848 12:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:25.848 12:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:25.848 12:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:25.848 12:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:25.848 12:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:25.848 12:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=99244 00:16:25.848 12:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:25.848 12:59:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 99244 00:16:25.848 12:59:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99244 ']' 00:16:25.848 12:59:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.848 12:59:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:25.848 12:59:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.848 12:59:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:25.848 12:59:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.848 [2024-11-26 12:59:43.366884] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:25.848 [2024-11-26 12:59:43.367125] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99244 ] 00:16:26.109 [2024-11-26 12:59:43.532526] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.109 [2024-11-26 12:59:43.578187] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.109 [2024-11-26 12:59:43.620837] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:26.109 [2024-11-26 12:59:43.620962] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.679 malloc1 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.679 [2024-11-26 12:59:44.223086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:26.679 [2024-11-26 12:59:44.223146] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.679 [2024-11-26 12:59:44.223185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:26.679 [2024-11-26 12:59:44.223196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.679 
[2024-11-26 12:59:44.225064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.679 [2024-11-26 12:59:44.225106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:26.679 pt1 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.679 malloc2 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.679 [2024-11-26 12:59:44.269045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:26.679 [2024-11-26 12:59:44.269254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.679 [2024-11-26 12:59:44.269337] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:26.679 [2024-11-26 12:59:44.269410] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.679 [2024-11-26 12:59:44.272879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.679 [2024-11-26 12:59:44.272995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:26.679 pt2 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.679 [2024-11-26 12:59:44.281261] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:26.679 [2024-11-26 12:59:44.283336] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:26.679 [2024-11-26 12:59:44.283541] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:26.679 [2024-11-26 12:59:44.283597] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:26.679 [2024-11-26 12:59:44.283700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:26.679 [2024-11-26 12:59:44.283819] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:26.679 [2024-11-26 12:59:44.283879] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:26.679 [2024-11-26 12:59:44.283990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.679 
12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.679 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.679 "name": "raid_bdev1", 00:16:26.679 "uuid": "568312b1-0652-4736-b480-b674928266df", 00:16:26.679 "strip_size_kb": 0, 00:16:26.679 "state": "online", 00:16:26.679 "raid_level": "raid1", 00:16:26.679 "superblock": true, 00:16:26.679 "num_base_bdevs": 2, 00:16:26.680 "num_base_bdevs_discovered": 2, 00:16:26.680 "num_base_bdevs_operational": 2, 00:16:26.680 "base_bdevs_list": [ 00:16:26.680 { 00:16:26.680 "name": "pt1", 00:16:26.680 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:26.680 "is_configured": true, 00:16:26.680 "data_offset": 256, 00:16:26.680 "data_size": 7936 00:16:26.680 }, 00:16:26.680 { 00:16:26.680 "name": "pt2", 00:16:26.680 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:26.680 "is_configured": true, 00:16:26.680 "data_offset": 256, 00:16:26.680 "data_size": 7936 00:16:26.680 } 00:16:26.680 ] 00:16:26.680 }' 00:16:26.680 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.680 12:59:44 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.250 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:27.250 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:27.250 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:27.250 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:27.250 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:27.250 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:27.250 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:27.250 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.250 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.250 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:27.250 [2024-11-26 12:59:44.732706] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:27.250 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.250 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:27.250 "name": "raid_bdev1", 00:16:27.250 "aliases": [ 00:16:27.250 "568312b1-0652-4736-b480-b674928266df" 00:16:27.250 ], 00:16:27.250 "product_name": "Raid Volume", 00:16:27.250 "block_size": 4128, 00:16:27.250 "num_blocks": 7936, 00:16:27.250 "uuid": "568312b1-0652-4736-b480-b674928266df", 00:16:27.250 "md_size": 32, 
00:16:27.250 "md_interleave": true, 00:16:27.250 "dif_type": 0, 00:16:27.250 "assigned_rate_limits": { 00:16:27.250 "rw_ios_per_sec": 0, 00:16:27.250 "rw_mbytes_per_sec": 0, 00:16:27.250 "r_mbytes_per_sec": 0, 00:16:27.250 "w_mbytes_per_sec": 0 00:16:27.250 }, 00:16:27.250 "claimed": false, 00:16:27.250 "zoned": false, 00:16:27.250 "supported_io_types": { 00:16:27.250 "read": true, 00:16:27.250 "write": true, 00:16:27.250 "unmap": false, 00:16:27.250 "flush": false, 00:16:27.250 "reset": true, 00:16:27.250 "nvme_admin": false, 00:16:27.250 "nvme_io": false, 00:16:27.250 "nvme_io_md": false, 00:16:27.250 "write_zeroes": true, 00:16:27.250 "zcopy": false, 00:16:27.250 "get_zone_info": false, 00:16:27.250 "zone_management": false, 00:16:27.250 "zone_append": false, 00:16:27.250 "compare": false, 00:16:27.250 "compare_and_write": false, 00:16:27.250 "abort": false, 00:16:27.250 "seek_hole": false, 00:16:27.250 "seek_data": false, 00:16:27.250 "copy": false, 00:16:27.250 "nvme_iov_md": false 00:16:27.250 }, 00:16:27.250 "memory_domains": [ 00:16:27.250 { 00:16:27.250 "dma_device_id": "system", 00:16:27.250 "dma_device_type": 1 00:16:27.250 }, 00:16:27.250 { 00:16:27.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.250 "dma_device_type": 2 00:16:27.250 }, 00:16:27.250 { 00:16:27.250 "dma_device_id": "system", 00:16:27.250 "dma_device_type": 1 00:16:27.250 }, 00:16:27.250 { 00:16:27.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.250 "dma_device_type": 2 00:16:27.250 } 00:16:27.250 ], 00:16:27.250 "driver_specific": { 00:16:27.250 "raid": { 00:16:27.250 "uuid": "568312b1-0652-4736-b480-b674928266df", 00:16:27.250 "strip_size_kb": 0, 00:16:27.250 "state": "online", 00:16:27.250 "raid_level": "raid1", 00:16:27.250 "superblock": true, 00:16:27.250 "num_base_bdevs": 2, 00:16:27.250 "num_base_bdevs_discovered": 2, 00:16:27.250 "num_base_bdevs_operational": 2, 00:16:27.250 "base_bdevs_list": [ 00:16:27.250 { 00:16:27.250 "name": "pt1", 00:16:27.250 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:16:27.250 "is_configured": true, 00:16:27.250 "data_offset": 256, 00:16:27.250 "data_size": 7936 00:16:27.250 }, 00:16:27.250 { 00:16:27.250 "name": "pt2", 00:16:27.250 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:27.250 "is_configured": true, 00:16:27.250 "data_offset": 256, 00:16:27.250 "data_size": 7936 00:16:27.250 } 00:16:27.250 ] 00:16:27.250 } 00:16:27.250 } 00:16:27.250 }' 00:16:27.250 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:27.250 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:27.250 pt2' 00:16:27.250 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.250 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:27.250 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.250 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:27.250 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.250 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.250 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.250 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.250 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:27.250 12:59:44 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:27.250 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.250 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:27.250 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.250 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.250 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.511 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.511 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:27.511 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:27.511 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:27.511 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.511 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.511 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:27.511 [2024-11-26 12:59:44.976186] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:27.511 12:59:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.511 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=568312b1-0652-4736-b480-b674928266df 00:16:27.511 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 568312b1-0652-4736-b480-b674928266df ']' 00:16:27.511 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:27.511 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.511 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.511 [2024-11-26 12:59:45.023951] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:27.511 [2024-11-26 12:59:45.024015] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:27.512 [2024-11-26 12:59:45.024103] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.512 [2024-11-26 12:59:45.024210] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.512 [2024-11-26 12:59:45.024264] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.512 12:59:45 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.512 12:59:45 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.512 [2024-11-26 12:59:45.163754] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:27.512 [2024-11-26 12:59:45.165636] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:27.512 [2024-11-26 12:59:45.165747] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:16:27.512 [2024-11-26 12:59:45.165788] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:27.512 [2024-11-26 12:59:45.165803] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:27.512 [2024-11-26 12:59:45.165818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:16:27.512 request: 00:16:27.512 { 00:16:27.512 "name": "raid_bdev1", 00:16:27.512 "raid_level": "raid1", 00:16:27.512 "base_bdevs": [ 00:16:27.512 "malloc1", 00:16:27.512 "malloc2" 00:16:27.512 ], 00:16:27.512 "superblock": false, 00:16:27.512 "method": "bdev_raid_create", 00:16:27.512 "req_id": 1 00:16:27.512 } 00:16:27.512 Got JSON-RPC error response 00:16:27.512 response: 00:16:27.512 { 00:16:27.512 "code": -17, 00:16:27.512 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:27.512 } 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.512 12:59:45 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.512 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.773 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:27.773 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:27.773 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:27.773 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.773 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.773 [2024-11-26 12:59:45.219613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:27.773 [2024-11-26 12:59:45.219709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.773 [2024-11-26 12:59:45.219741] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:27.773 [2024-11-26 12:59:45.219767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.773 [2024-11-26 12:59:45.221568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.773 [2024-11-26 12:59:45.221649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:27.773 [2024-11-26 12:59:45.221710] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:27.773 [2024-11-26 12:59:45.221759] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:27.773 pt1 00:16:27.773 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.773 12:59:45 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:27.773 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.773 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:27.773 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:27.773 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:27.773 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:27.773 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.773 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.773 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.773 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.773 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.773 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.773 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.773 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.773 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.773 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.773 
"name": "raid_bdev1",
00:16:27.773 "uuid": "568312b1-0652-4736-b480-b674928266df",
00:16:27.773 "strip_size_kb": 0,
00:16:27.773 "state": "configuring",
00:16:27.773 "raid_level": "raid1",
00:16:27.773 "superblock": true,
00:16:27.773 "num_base_bdevs": 2,
00:16:27.773 "num_base_bdevs_discovered": 1,
00:16:27.773 "num_base_bdevs_operational": 2,
00:16:27.773 "base_bdevs_list": [
00:16:27.773 {
00:16:27.773 "name": "pt1",
00:16:27.773 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:27.773 "is_configured": true,
00:16:27.773 "data_offset": 256,
00:16:27.773 "data_size": 7936
00:16:27.773 },
00:16:27.773 {
00:16:27.773 "name": null,
00:16:27.773 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:27.773 "is_configured": false,
00:16:27.773 "data_offset": 256,
00:16:27.773 "data_size": 7936
00:16:27.773 }
00:16:27.773 ]
00:16:27.773 }'
00:16:27.773 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:27.773 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:28.034 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:16:28.034 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:16:28.034 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:16:28.034 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:28.034 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:28.034 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:28.034 [2024-11-26 12:59:45.694826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:28.034 [2024-11-26 12:59:45.694875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:28.034 [2024-11-26 12:59:45.694893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:16:28.034 [2024-11-26 12:59:45.694901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:28.034 [2024-11-26 12:59:45.695000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:28.034 [2024-11-26 12:59:45.695010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:28.034 [2024-11-26 12:59:45.695043] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:16:28.034 [2024-11-26 12:59:45.695057] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:28.034 [2024-11-26 12:59:45.695116] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:16:28.034 [2024-11-26 12:59:45.695123] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:16:28.034 [2024-11-26 12:59:45.695210] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:16:28.034 [2024-11-26 12:59:45.695262] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:16:28.034 [2024-11-26 12:59:45.695274] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:16:28.034 [2024-11-26 12:59:45.695320] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:28.034 pt2
00:16:28.034 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:28.034 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:16:28.034 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:16:28.034 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:16:28.034 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:28.034 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:28.034 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:28.034 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:28.034 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:16:28.034 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:28.034 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:28.034 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:28.034 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:28.034 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:28.034 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:28.034 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:28.034 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:28.301 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:28.301 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:28.301 "name": "raid_bdev1",
00:16:28.301 "uuid": "568312b1-0652-4736-b480-b674928266df",
00:16:28.301 "strip_size_kb": 0,
00:16:28.301 "state": "online",
00:16:28.301 "raid_level": "raid1",
00:16:28.301 "superblock": true,
00:16:28.301 "num_base_bdevs": 2,
00:16:28.301 "num_base_bdevs_discovered": 2,
00:16:28.301 "num_base_bdevs_operational": 2,
00:16:28.301 "base_bdevs_list": [
00:16:28.301 {
00:16:28.301 "name": "pt1",
00:16:28.301 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:28.301 "is_configured": true,
00:16:28.301 "data_offset": 256,
00:16:28.301 "data_size": 7936
00:16:28.301 },
00:16:28.301 {
00:16:28.301 "name": "pt2",
00:16:28.301 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:28.301 "is_configured": true,
00:16:28.301 "data_offset": 256,
00:16:28.301 "data_size": 7936
00:16:28.301 }
00:16:28.301 ]
00:16:28.301 }'
00:16:28.301 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:28.301 12:59:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:28.594 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:16:28.594 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:16:28.594 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:16:28.594 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:16:28.594 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name
00:16:28.594 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:16:28.594 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:28.594 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:16:28.594 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:28.594 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:28.594 [2024-11-26 12:59:46.130333] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:28.594 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:28.594 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:16:28.595 "name": "raid_bdev1",
00:16:28.595 "aliases": [
00:16:28.595 "568312b1-0652-4736-b480-b674928266df"
00:16:28.595 ],
00:16:28.595 "product_name": "Raid Volume",
00:16:28.595 "block_size": 4128,
00:16:28.595 "num_blocks": 7936,
00:16:28.595 "uuid": "568312b1-0652-4736-b480-b674928266df",
00:16:28.595 "md_size": 32,
00:16:28.595 "md_interleave": true,
00:16:28.595 "dif_type": 0,
00:16:28.595 "assigned_rate_limits": {
00:16:28.595 "rw_ios_per_sec": 0,
00:16:28.595 "rw_mbytes_per_sec": 0,
00:16:28.595 "r_mbytes_per_sec": 0,
00:16:28.595 "w_mbytes_per_sec": 0
00:16:28.595 },
00:16:28.595 "claimed": false,
00:16:28.595 "zoned": false,
00:16:28.595 "supported_io_types": {
00:16:28.595 "read": true,
00:16:28.595 "write": true,
00:16:28.595 "unmap": false,
00:16:28.595 "flush": false,
00:16:28.595 "reset": true,
00:16:28.595 "nvme_admin": false,
00:16:28.595 "nvme_io": false,
00:16:28.595 "nvme_io_md": false,
00:16:28.595 "write_zeroes": true,
00:16:28.595 "zcopy": false,
00:16:28.595 "get_zone_info": false,
00:16:28.595 "zone_management": false,
00:16:28.595 "zone_append": false,
00:16:28.595 "compare": false,
00:16:28.595 "compare_and_write": false,
00:16:28.595 "abort": false,
00:16:28.595 "seek_hole": false,
00:16:28.595 "seek_data": false,
00:16:28.595 "copy": false,
00:16:28.595 "nvme_iov_md": false
00:16:28.595 },
00:16:28.595 "memory_domains": [
00:16:28.595 {
00:16:28.595 "dma_device_id": "system",
00:16:28.595 "dma_device_type": 1
00:16:28.595 },
00:16:28.595 {
00:16:28.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:28.595 "dma_device_type": 2
00:16:28.595 },
00:16:28.595 {
00:16:28.595 "dma_device_id": "system",
00:16:28.595 "dma_device_type": 1
00:16:28.595 },
00:16:28.595 {
00:16:28.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:28.595 "dma_device_type": 2
00:16:28.595 }
00:16:28.595 ],
00:16:28.595 "driver_specific": {
00:16:28.595 "raid": {
00:16:28.595 "uuid": "568312b1-0652-4736-b480-b674928266df",
00:16:28.595 "strip_size_kb": 0,
00:16:28.595 "state": "online",
00:16:28.595 "raid_level": "raid1",
00:16:28.595 "superblock": true,
00:16:28.595 "num_base_bdevs": 2,
00:16:28.595 "num_base_bdevs_discovered": 2,
00:16:28.595 "num_base_bdevs_operational": 2,
00:16:28.595 "base_bdevs_list": [
00:16:28.595 {
00:16:28.595 "name": "pt1",
00:16:28.595 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:28.595 "is_configured": true,
00:16:28.595 "data_offset": 256,
00:16:28.595 "data_size": 7936
00:16:28.595 },
00:16:28.595 {
00:16:28.595 "name": "pt2",
00:16:28.595 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:28.595 "is_configured": true,
00:16:28.595 "data_offset": 256,
00:16:28.595 "data_size": 7936
00:16:28.595 }
00:16:28.595 ]
00:16:28.595 }
00:16:28.595 }
00:16:28.595 }'
00:16:28.595 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:16:28.595 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:16:28.595 pt2'
00:16:28.595 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:28.595 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0'
00:16:28.595 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:28.595 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:16:28.595 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:28.595 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:28.595 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:28.595 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:28.884 [2024-11-26 12:59:46.341933] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 568312b1-0652-4736-b480-b674928266df '!=' 568312b1-0652-4736-b480-b674928266df ']'
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:28.884 [2024-11-26 12:59:46.385665] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:28.884 "name": "raid_bdev1",
00:16:28.884 "uuid": "568312b1-0652-4736-b480-b674928266df",
00:16:28.884 "strip_size_kb": 0,
00:16:28.884 "state": "online",
00:16:28.884 "raid_level": "raid1",
00:16:28.884 "superblock": true,
00:16:28.884 "num_base_bdevs": 2,
00:16:28.884 "num_base_bdevs_discovered": 1,
00:16:28.884 "num_base_bdevs_operational": 1,
00:16:28.884 "base_bdevs_list": [
00:16:28.884 {
00:16:28.884 "name": null,
00:16:28.884 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:28.884 "is_configured": false,
00:16:28.884 "data_offset": 0,
00:16:28.884 "data_size": 7936
00:16:28.884 },
00:16:28.884 {
00:16:28.884 "name": "pt2",
00:16:28.884 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:28.884 "is_configured": true,
00:16:28.884 "data_offset": 256,
00:16:28.884 "data_size": 7936
00:16:28.884 }
00:16:28.884 ]
00:16:28.884 }'
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:28.884 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:29.456 [2024-11-26 12:59:46.836859] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:29.456 [2024-11-26 12:59:46.836927] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:29.456 [2024-11-26 12:59:46.837015] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:29.456 [2024-11-26 12:59:46.837069] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:29.456 [2024-11-26 12:59:46.837137] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:29.456 [2024-11-26 12:59:46.900757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:29.456 [2024-11-26 12:59:46.900837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:29.456 [2024-11-26 12:59:46.900884] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:16:29.456 [2024-11-26 12:59:46.900910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:29.456 [2024-11-26 12:59:46.902793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:29.456 [2024-11-26 12:59:46.902860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:29.456 [2024-11-26 12:59:46.902920] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:16:29.456 [2024-11-26 12:59:46.902978] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:29.456 [2024-11-26 12:59:46.903065] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:16:29.456 [2024-11-26 12:59:46.903099] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:16:29.456 [2024-11-26 12:59:46.903205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:16:29.456 [2024-11-26 12:59:46.903295] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:16:29.456 [2024-11-26 12:59:46.903329] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00
00:16:29.456 [2024-11-26 12:59:46.903410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:29.456 pt2
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:29.456 "name": "raid_bdev1",
00:16:29.456 "uuid": "568312b1-0652-4736-b480-b674928266df",
00:16:29.456 "strip_size_kb": 0,
00:16:29.456 "state": "online",
00:16:29.456 "raid_level": "raid1",
00:16:29.456 "superblock": true,
00:16:29.456 "num_base_bdevs": 2,
00:16:29.456 "num_base_bdevs_discovered": 1,
00:16:29.456 "num_base_bdevs_operational": 1,
00:16:29.456 "base_bdevs_list": [
00:16:29.456 {
00:16:29.456 "name": null,
00:16:29.456 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:29.456 "is_configured": false,
00:16:29.456 "data_offset": 256,
00:16:29.456 "data_size": 7936
00:16:29.456 },
00:16:29.456 {
00:16:29.456 "name": "pt2",
00:16:29.456 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:29.456 "is_configured": true,
00:16:29.456 "data_offset": 256,
00:16:29.456 "data_size": 7936
00:16:29.456 }
00:16:29.456 ]
00:16:29.456 }'
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:29.456 12:59:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:29.717 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:16:29.717 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:29.717 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:29.717 [2024-11-26 12:59:47.340015] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:29.717 [2024-11-26 12:59:47.340080] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:29.717 [2024-11-26 12:59:47.340130] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:29.717 [2024-11-26 12:59:47.340162] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:29.717 [2024-11-26 12:59:47.340172] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline
00:16:29.717 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:29.717 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:29.717 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:29.717 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:16:29.717 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:29.717 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:29.977 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:16:29.977 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:16:29.977 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']'
00:16:29.977 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:29.977 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:29.977 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:29.977 [2024-11-26 12:59:47.403936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:16:29.977 [2024-11-26 12:59:47.404021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:29.977 [2024-11-26 12:59:47.404053] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:16:29.977 [2024-11-26 12:59:47.404087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:29.977 [2024-11-26 12:59:47.406057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:29.977 [2024-11-26 12:59:47.406144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:16:29.977 [2024-11-26 12:59:47.406212] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:16:29.977 [2024-11-26 12:59:47.406262] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:29.977 [2024-11-26 12:59:47.406356] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:16:29.977 [2024-11-26 12:59:47.406432] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:29.977 [2024-11-26 12:59:47.406470] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring
00:16:29.977 [2024-11-26 12:59:47.406559] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:29.977 [2024-11-26 12:59:47.406648] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400
00:16:29.977 [2024-11-26 12:59:47.406687] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:16:29.978 [2024-11-26 12:59:47.406746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:16:29.978 [2024-11-26 12:59:47.406802] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400
00:16:29.978 [2024-11-26 12:59:47.406811] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400
00:16:29.978 [2024-11-26 12:59:47.406871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:29.978 pt1
00:16:29.978 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:29.978 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']'
00:16:29.978 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:16:29.978 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:29.978 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:29.978 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:29.978 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:29.978 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:16:29.978 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:29.978 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:29.978 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:29.978 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:29.978 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:29.978 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:29.978 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:29.978 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:29.978 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:29.978 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:29.978 "name": "raid_bdev1",
00:16:29.978 "uuid": "568312b1-0652-4736-b480-b674928266df",
00:16:29.978 "strip_size_kb": 0,
00:16:29.978 "state": "online",
00:16:29.978 "raid_level": "raid1",
00:16:29.978 "superblock": true,
00:16:29.978 "num_base_bdevs": 2,
00:16:29.978 "num_base_bdevs_discovered": 1,
00:16:29.978 "num_base_bdevs_operational": 1,
00:16:29.978 "base_bdevs_list": [
00:16:29.978 {
00:16:29.978 "name": null,
00:16:29.978 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:29.978 "is_configured": false,
00:16:29.978 "data_offset": 256,
00:16:29.978 "data_size": 7936
00:16:29.978 },
00:16:29.978 {
00:16:29.978 "name": "pt2",
00:16:29.978 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:29.978 "is_configured": true,
00:16:29.978 "data_offset": 256,
00:16:29.978 "data_size": 7936
00:16:29.978 }
00:16:29.978 ]
00:16:29.978 }'
00:16:29.978 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:29.978 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:30.238 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:16:30.238 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:16:30.238 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:30.238 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:30.238 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:30.238 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:16:30.238 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:30.238 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:30.238 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:30.238 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:16:30.238 [2024-11-26 12:59:47.871511] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:30.238 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:30.499 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 568312b1-0652-4736-b480-b674928266df '!=' 568312b1-0652-4736-b480-b674928266df ']'
00:16:30.499 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 99244
00:16:30.499 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99244 ']'
00:16:30.499 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99244
00:16:30.499 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname
00:16:30.499 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:16:30.499 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99244
00:16:30.499 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:16:30.499 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:16:30.499 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99244'
00:16:30.499 killing process with pid 99244
00:16:30.499 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 99244
00:16:30.499 [2024-11-26 12:59:47.950500] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:16:30.499 [2024-11-26 12:59:47.950612] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:30.499 [2024-11-26 12:59:47.950678] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:30.499 [2024-11-26 12:59:47.950718] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline
00:16:30.499 12:59:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 99244
00:16:30.499 [2024-11-26 12:59:47.974060] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:16:30.759 12:59:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0
00:16:30.759
00:16:30.759 real 0m4.953s
00:16:30.759 user 0m7.958s
00:16:30.760 sys 0m1.155s
12:59:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:30.760 ************************************ 00:16:30.760 END TEST raid_superblock_test_md_interleaved 00:16:30.760 ************************************ 00:16:30.760 12:59:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:30.760 12:59:48 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:16:30.760 12:59:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:30.760 12:59:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:30.760 12:59:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:30.760 ************************************ 00:16:30.760 START TEST raid_rebuild_test_sb_md_interleaved 00:16:30.760 ************************************ 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=99556 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 99556 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99556 ']' 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:30.760 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:30.760 [2024-11-26 12:59:48.417867] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:30.760 [2024-11-26 12:59:48.418133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99556 ] 00:16:30.760 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:30.760 Zero copy mechanism will not be used. 
00:16:31.020 [2024-11-26 12:59:48.584778] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.020 [2024-11-26 12:59:48.629698] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.020 [2024-11-26 12:59:48.672299] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:31.020 [2024-11-26 12:59:48.672411] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:31.590 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:31.590 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:31.590 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:31.590 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:16:31.590 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.590 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.590 BaseBdev1_malloc 00:16:31.590 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.590 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:31.590 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.590 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.590 [2024-11-26 12:59:49.250714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:31.590 [2024-11-26 12:59:49.250772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.591 
[2024-11-26 12:59:49.250814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:31.591 [2024-11-26 12:59:49.250824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.591 [2024-11-26 12:59:49.252765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.591 [2024-11-26 12:59:49.252854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:31.591 BaseBdev1 00:16:31.591 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.591 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:31.591 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:16:31.591 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.591 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.851 BaseBdev2_malloc 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.851 [2024-11-26 12:59:49.294307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:31.851 [2024-11-26 12:59:49.294430] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.851 [2024-11-26 12:59:49.294486] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:31.851 [2024-11-26 12:59:49.294514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.851 [2024-11-26 12:59:49.298556] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.851 [2024-11-26 12:59:49.298616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:31.851 BaseBdev2 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.851 spare_malloc 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.851 spare_delay 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.851 12:59:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.851 [2024-11-26 12:59:49.336833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:31.851 [2024-11-26 12:59:49.336882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.851 [2024-11-26 12:59:49.336920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:31.851 [2024-11-26 12:59:49.336928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.851 [2024-11-26 12:59:49.338783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.851 [2024-11-26 12:59:49.338818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:31.851 spare 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.851 [2024-11-26 12:59:49.348841] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:31.851 [2024-11-26 12:59:49.350637] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:31.851 [2024-11-26 12:59:49.350792] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:31.851 [2024-11-26 12:59:49.350805] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:31.851 [2024-11-26 12:59:49.350892] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 
00:16:31.851 [2024-11-26 12:59:49.350957] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:31.851 [2024-11-26 12:59:49.350978] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:31.851 [2024-11-26 12:59:49.351040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.851 "name": "raid_bdev1", 00:16:31.851 "uuid": "4db0de34-caa4-4d14-a677-181640916e04", 00:16:31.851 "strip_size_kb": 0, 00:16:31.851 "state": "online", 00:16:31.851 "raid_level": "raid1", 00:16:31.851 "superblock": true, 00:16:31.851 "num_base_bdevs": 2, 00:16:31.851 "num_base_bdevs_discovered": 2, 00:16:31.851 "num_base_bdevs_operational": 2, 00:16:31.851 "base_bdevs_list": [ 00:16:31.851 { 00:16:31.851 "name": "BaseBdev1", 00:16:31.851 "uuid": "a1a83448-d489-51ef-bdb3-abf76bc799c0", 00:16:31.851 "is_configured": true, 00:16:31.851 "data_offset": 256, 00:16:31.851 "data_size": 7936 00:16:31.851 }, 00:16:31.851 { 00:16:31.851 "name": "BaseBdev2", 00:16:31.851 "uuid": "471a5b47-94b5-5ce5-9c27-10c95f405c90", 00:16:31.851 "is_configured": true, 00:16:31.851 "data_offset": 256, 00:16:31.851 "data_size": 7936 00:16:31.851 } 00:16:31.851 ] 00:16:31.851 }' 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.851 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:32.420 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:32.420 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set 
+x 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:32.421 [2024-11-26 12:59:49.800303] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:32.421 [2024-11-26 12:59:49.879941] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:32.421 12:59:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.421 12:59:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.421 "name": "raid_bdev1", 00:16:32.421 "uuid": "4db0de34-caa4-4d14-a677-181640916e04", 00:16:32.421 "strip_size_kb": 0, 00:16:32.421 "state": "online", 00:16:32.421 "raid_level": "raid1", 00:16:32.421 "superblock": true, 00:16:32.421 "num_base_bdevs": 2, 00:16:32.421 "num_base_bdevs_discovered": 1, 00:16:32.421 "num_base_bdevs_operational": 1, 00:16:32.421 "base_bdevs_list": [ 00:16:32.421 { 00:16:32.421 "name": null, 00:16:32.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.421 "is_configured": false, 00:16:32.421 "data_offset": 0, 00:16:32.421 "data_size": 7936 00:16:32.421 }, 00:16:32.421 { 00:16:32.421 "name": "BaseBdev2", 00:16:32.421 "uuid": "471a5b47-94b5-5ce5-9c27-10c95f405c90", 00:16:32.421 "is_configured": true, 00:16:32.421 "data_offset": 256, 00:16:32.421 "data_size": 7936 00:16:32.421 } 00:16:32.421 ] 00:16:32.421 }' 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.421 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:32.680 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:32.680 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.680 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:32.680 [2024-11-26 12:59:50.287391] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:32.680 [2024-11-26 12:59:50.290349] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:32.680 [2024-11-26 12:59:50.292311] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:32.680 12:59:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.680 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.062 "name": "raid_bdev1", 00:16:34.062 "uuid": "4db0de34-caa4-4d14-a677-181640916e04", 00:16:34.062 "strip_size_kb": 0, 00:16:34.062 "state": "online", 00:16:34.062 "raid_level": "raid1", 00:16:34.062 "superblock": true, 00:16:34.062 "num_base_bdevs": 2, 00:16:34.062 "num_base_bdevs_discovered": 2, 00:16:34.062 "num_base_bdevs_operational": 2, 00:16:34.062 "process": { 00:16:34.062 "type": "rebuild", 
00:16:34.062 "target": "spare", 00:16:34.062 "progress": { 00:16:34.062 "blocks": 2560, 00:16:34.062 "percent": 32 00:16:34.062 } 00:16:34.062 }, 00:16:34.062 "base_bdevs_list": [ 00:16:34.062 { 00:16:34.062 "name": "spare", 00:16:34.062 "uuid": "f9e72ff6-b029-5d22-a86b-c5ec91792d03", 00:16:34.062 "is_configured": true, 00:16:34.062 "data_offset": 256, 00:16:34.062 "data_size": 7936 00:16:34.062 }, 00:16:34.062 { 00:16:34.062 "name": "BaseBdev2", 00:16:34.062 "uuid": "471a5b47-94b5-5ce5-9c27-10c95f405c90", 00:16:34.062 "is_configured": true, 00:16:34.062 "data_offset": 256, 00:16:34.062 "data_size": 7936 00:16:34.062 } 00:16:34.062 ] 00:16:34.062 }' 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:34.062 [2024-11-26 12:59:51.455046] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:34.062 [2024-11-26 12:59:51.497122] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:34.062 [2024-11-26 12:59:51.497235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:34.062 [2024-11-26 12:59:51.497273] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:34.062 [2024-11-26 12:59:51.497310] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.062 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.063 
12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:34.063 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.063 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.063 "name": "raid_bdev1", 00:16:34.063 "uuid": "4db0de34-caa4-4d14-a677-181640916e04", 00:16:34.063 "strip_size_kb": 0, 00:16:34.063 "state": "online", 00:16:34.063 "raid_level": "raid1", 00:16:34.063 "superblock": true, 00:16:34.063 "num_base_bdevs": 2, 00:16:34.063 "num_base_bdevs_discovered": 1, 00:16:34.063 "num_base_bdevs_operational": 1, 00:16:34.063 "base_bdevs_list": [ 00:16:34.063 { 00:16:34.063 "name": null, 00:16:34.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.063 "is_configured": false, 00:16:34.063 "data_offset": 0, 00:16:34.063 "data_size": 7936 00:16:34.063 }, 00:16:34.063 { 00:16:34.063 "name": "BaseBdev2", 00:16:34.063 "uuid": "471a5b47-94b5-5ce5-9c27-10c95f405c90", 00:16:34.063 "is_configured": true, 00:16:34.063 "data_offset": 256, 00:16:34.063 "data_size": 7936 00:16:34.063 } 00:16:34.063 ] 00:16:34.063 }' 00:16:34.063 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.063 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:34.322 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:34.322 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.322 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:34.322 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:34.322 12:59:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.322 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.322 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.322 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.322 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:34.322 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.322 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.322 "name": "raid_bdev1", 00:16:34.322 "uuid": "4db0de34-caa4-4d14-a677-181640916e04", 00:16:34.322 "strip_size_kb": 0, 00:16:34.322 "state": "online", 00:16:34.322 "raid_level": "raid1", 00:16:34.322 "superblock": true, 00:16:34.322 "num_base_bdevs": 2, 00:16:34.322 "num_base_bdevs_discovered": 1, 00:16:34.322 "num_base_bdevs_operational": 1, 00:16:34.322 "base_bdevs_list": [ 00:16:34.322 { 00:16:34.322 "name": null, 00:16:34.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.322 "is_configured": false, 00:16:34.322 "data_offset": 0, 00:16:34.322 "data_size": 7936 00:16:34.322 }, 00:16:34.322 { 00:16:34.322 "name": "BaseBdev2", 00:16:34.322 "uuid": "471a5b47-94b5-5ce5-9c27-10c95f405c90", 00:16:34.322 "is_configured": true, 00:16:34.322 "data_offset": 256, 00:16:34.322 "data_size": 7936 00:16:34.322 } 00:16:34.322 ] 00:16:34.322 }' 00:16:34.322 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.580 12:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:34.580 12:59:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.580 12:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:34.580 12:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:34.580 12:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.580 12:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:34.580 [2024-11-26 12:59:52.067687] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:34.580 [2024-11-26 12:59:52.070284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:34.580 [2024-11-26 12:59:52.072140] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:34.581 12:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.581 12:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:35.553 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.553 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.553 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.553 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.553 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.553 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:35.553 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.553 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.553 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.553 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.553 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.553 "name": "raid_bdev1", 00:16:35.553 "uuid": "4db0de34-caa4-4d14-a677-181640916e04", 00:16:35.553 "strip_size_kb": 0, 00:16:35.553 "state": "online", 00:16:35.553 "raid_level": "raid1", 00:16:35.553 "superblock": true, 00:16:35.553 "num_base_bdevs": 2, 00:16:35.553 "num_base_bdevs_discovered": 2, 00:16:35.553 "num_base_bdevs_operational": 2, 00:16:35.553 "process": { 00:16:35.553 "type": "rebuild", 00:16:35.553 "target": "spare", 00:16:35.553 "progress": { 00:16:35.553 "blocks": 2560, 00:16:35.553 "percent": 32 00:16:35.553 } 00:16:35.553 }, 00:16:35.553 "base_bdevs_list": [ 00:16:35.553 { 00:16:35.553 "name": "spare", 00:16:35.553 "uuid": "f9e72ff6-b029-5d22-a86b-c5ec91792d03", 00:16:35.553 "is_configured": true, 00:16:35.553 "data_offset": 256, 00:16:35.553 "data_size": 7936 00:16:35.553 }, 00:16:35.553 { 00:16:35.553 "name": "BaseBdev2", 00:16:35.553 "uuid": "471a5b47-94b5-5ce5-9c27-10c95f405c90", 00:16:35.553 "is_configured": true, 00:16:35.553 "data_offset": 256, 00:16:35.553 "data_size": 7936 00:16:35.553 } 00:16:35.553 ] 00:16:35.553 }' 00:16:35.553 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.553 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.553 12:59:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.553 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.553 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:35.553 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:35.553 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:35.553 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:35.553 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:35.553 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:35.813 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=617 00:16:35.813 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:35.813 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.813 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.813 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.813 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.813 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.813 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.813 12:59:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.813 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.813 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.813 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.813 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.813 "name": "raid_bdev1", 00:16:35.813 "uuid": "4db0de34-caa4-4d14-a677-181640916e04", 00:16:35.813 "strip_size_kb": 0, 00:16:35.813 "state": "online", 00:16:35.813 "raid_level": "raid1", 00:16:35.813 "superblock": true, 00:16:35.813 "num_base_bdevs": 2, 00:16:35.813 "num_base_bdevs_discovered": 2, 00:16:35.813 "num_base_bdevs_operational": 2, 00:16:35.813 "process": { 00:16:35.813 "type": "rebuild", 00:16:35.813 "target": "spare", 00:16:35.813 "progress": { 00:16:35.813 "blocks": 2816, 00:16:35.813 "percent": 35 00:16:35.813 } 00:16:35.813 }, 00:16:35.813 "base_bdevs_list": [ 00:16:35.813 { 00:16:35.813 "name": "spare", 00:16:35.813 "uuid": "f9e72ff6-b029-5d22-a86b-c5ec91792d03", 00:16:35.813 "is_configured": true, 00:16:35.813 "data_offset": 256, 00:16:35.813 "data_size": 7936 00:16:35.813 }, 00:16:35.813 { 00:16:35.813 "name": "BaseBdev2", 00:16:35.813 "uuid": "471a5b47-94b5-5ce5-9c27-10c95f405c90", 00:16:35.813 "is_configured": true, 00:16:35.813 "data_offset": 256, 00:16:35.813 "data_size": 7936 00:16:35.813 } 00:16:35.813 ] 00:16:35.813 }' 00:16:35.813 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.813 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.813 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.813 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.813 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:36.753 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:36.753 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:36.753 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.753 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:36.753 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:36.753 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.753 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.753 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.753 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.753 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:36.753 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.753 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.753 "name": "raid_bdev1", 00:16:36.753 "uuid": "4db0de34-caa4-4d14-a677-181640916e04", 00:16:36.753 "strip_size_kb": 0, 00:16:36.753 "state": "online", 00:16:36.754 "raid_level": "raid1", 
00:16:36.754 "superblock": true, 00:16:36.754 "num_base_bdevs": 2, 00:16:36.754 "num_base_bdevs_discovered": 2, 00:16:36.754 "num_base_bdevs_operational": 2, 00:16:36.754 "process": { 00:16:36.754 "type": "rebuild", 00:16:36.754 "target": "spare", 00:16:36.754 "progress": { 00:16:36.754 "blocks": 5888, 00:16:36.754 "percent": 74 00:16:36.754 } 00:16:36.754 }, 00:16:36.754 "base_bdevs_list": [ 00:16:36.754 { 00:16:36.754 "name": "spare", 00:16:36.754 "uuid": "f9e72ff6-b029-5d22-a86b-c5ec91792d03", 00:16:36.754 "is_configured": true, 00:16:36.754 "data_offset": 256, 00:16:36.754 "data_size": 7936 00:16:36.754 }, 00:16:36.754 { 00:16:36.754 "name": "BaseBdev2", 00:16:36.754 "uuid": "471a5b47-94b5-5ce5-9c27-10c95f405c90", 00:16:36.754 "is_configured": true, 00:16:36.754 "data_offset": 256, 00:16:36.754 "data_size": 7936 00:16:36.754 } 00:16:36.754 ] 00:16:36.754 }' 00:16:36.754 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.014 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:37.014 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.014 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.014 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:37.732 [2024-11-26 12:59:55.182944] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:37.732 [2024-11-26 12:59:55.183021] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:37.732 [2024-11-26 12:59:55.183115] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.992 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 
00:16:37.993 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:37.993 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.993 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:37.993 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:37.993 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.993 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.993 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.993 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.993 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:37.993 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.993 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.993 "name": "raid_bdev1", 00:16:37.993 "uuid": "4db0de34-caa4-4d14-a677-181640916e04", 00:16:37.993 "strip_size_kb": 0, 00:16:37.993 "state": "online", 00:16:37.993 "raid_level": "raid1", 00:16:37.993 "superblock": true, 00:16:37.993 "num_base_bdevs": 2, 00:16:37.993 "num_base_bdevs_discovered": 2, 00:16:37.993 "num_base_bdevs_operational": 2, 00:16:37.993 "base_bdevs_list": [ 00:16:37.993 { 00:16:37.993 "name": "spare", 00:16:37.993 "uuid": "f9e72ff6-b029-5d22-a86b-c5ec91792d03", 00:16:37.993 "is_configured": true, 00:16:37.993 "data_offset": 256, 00:16:37.993 "data_size": 7936 00:16:37.993 }, 
00:16:37.993 { 00:16:37.993 "name": "BaseBdev2", 00:16:37.993 "uuid": "471a5b47-94b5-5ce5-9c27-10c95f405c90", 00:16:37.993 "is_configured": true, 00:16:37.993 "data_offset": 256, 00:16:37.993 "data_size": 7936 00:16:37.993 } 00:16:37.993 ] 00:16:37.993 }' 00:16:37.993 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.993 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:37.993 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.993 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:37.993 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:16:37.993 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:37.993 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.993 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:37.993 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:37.993 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.993 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.993 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.993 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.993 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- 
# set +x 00:16:37.993 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.253 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.253 "name": "raid_bdev1", 00:16:38.253 "uuid": "4db0de34-caa4-4d14-a677-181640916e04", 00:16:38.253 "strip_size_kb": 0, 00:16:38.253 "state": "online", 00:16:38.253 "raid_level": "raid1", 00:16:38.253 "superblock": true, 00:16:38.253 "num_base_bdevs": 2, 00:16:38.253 "num_base_bdevs_discovered": 2, 00:16:38.253 "num_base_bdevs_operational": 2, 00:16:38.253 "base_bdevs_list": [ 00:16:38.253 { 00:16:38.253 "name": "spare", 00:16:38.253 "uuid": "f9e72ff6-b029-5d22-a86b-c5ec91792d03", 00:16:38.253 "is_configured": true, 00:16:38.253 "data_offset": 256, 00:16:38.253 "data_size": 7936 00:16:38.253 }, 00:16:38.253 { 00:16:38.253 "name": "BaseBdev2", 00:16:38.253 "uuid": "471a5b47-94b5-5ce5-9c27-10c95f405c90", 00:16:38.253 "is_configured": true, 00:16:38.253 "data_offset": 256, 00:16:38.253 "data_size": 7936 00:16:38.253 } 00:16:38.253 ] 00:16:38.253 }' 00:16:38.253 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.253 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:38.253 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.253 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:38.253 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:38.253 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.253 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:38.253 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.253 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.253 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:38.253 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.253 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.253 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.253 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.253 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.253 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.253 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:38.253 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.253 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.253 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.253 "name": "raid_bdev1", 00:16:38.253 "uuid": "4db0de34-caa4-4d14-a677-181640916e04", 00:16:38.253 "strip_size_kb": 0, 00:16:38.253 "state": "online", 00:16:38.253 "raid_level": "raid1", 00:16:38.253 "superblock": true, 00:16:38.253 "num_base_bdevs": 2, 00:16:38.253 "num_base_bdevs_discovered": 2, 00:16:38.253 "num_base_bdevs_operational": 2, 00:16:38.253 "base_bdevs_list": [ 
00:16:38.254 { 00:16:38.254 "name": "spare", 00:16:38.254 "uuid": "f9e72ff6-b029-5d22-a86b-c5ec91792d03", 00:16:38.254 "is_configured": true, 00:16:38.254 "data_offset": 256, 00:16:38.254 "data_size": 7936 00:16:38.254 }, 00:16:38.254 { 00:16:38.254 "name": "BaseBdev2", 00:16:38.254 "uuid": "471a5b47-94b5-5ce5-9c27-10c95f405c90", 00:16:38.254 "is_configured": true, 00:16:38.254 "data_offset": 256, 00:16:38.254 "data_size": 7936 00:16:38.254 } 00:16:38.254 ] 00:16:38.254 }' 00:16:38.254 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.254 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:38.824 [2024-11-26 12:59:56.213041] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:38.824 [2024-11-26 12:59:56.213069] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:38.824 [2024-11-26 12:59:56.213146] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:38.824 [2024-11-26 12:59:56.213224] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:38.824 [2024-11-26 12:59:56.213247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:38.824 [2024-11-26 12:59:56.288907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:38.824 [2024-11-26 12:59:56.288963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.824 [2024-11-26 12:59:56.288982] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:38.824 [2024-11-26 12:59:56.288992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.824 [2024-11-26 12:59:56.290929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.824 [2024-11-26 12:59:56.290971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:38.824 [2024-11-26 12:59:56.291030] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:38.824 [2024-11-26 12:59:56.291075] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:38.824 [2024-11-26 12:59:56.291160] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:38.824 spare 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:38.824 [2024-11-26 12:59:56.391063] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:16:38.824 [2024-11-26 12:59:56.391087] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:38.824 [2024-11-26 12:59:56.391169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:38.824 [2024-11-26 12:59:56.391283] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:16:38.824 [2024-11-26 12:59:56.391294] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:16:38.824 [2024-11-26 
12:59:56.391360] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:38.824 12:59:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.824 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.824 "name": "raid_bdev1", 00:16:38.824 "uuid": "4db0de34-caa4-4d14-a677-181640916e04", 00:16:38.824 "strip_size_kb": 0, 00:16:38.824 "state": "online", 00:16:38.824 "raid_level": "raid1", 00:16:38.824 "superblock": true, 00:16:38.824 "num_base_bdevs": 2, 00:16:38.824 "num_base_bdevs_discovered": 2, 00:16:38.824 "num_base_bdevs_operational": 2, 00:16:38.824 "base_bdevs_list": [ 00:16:38.824 { 00:16:38.824 "name": "spare", 00:16:38.824 "uuid": "f9e72ff6-b029-5d22-a86b-c5ec91792d03", 00:16:38.824 "is_configured": true, 00:16:38.824 "data_offset": 256, 00:16:38.824 "data_size": 7936 00:16:38.824 }, 00:16:38.824 { 00:16:38.824 "name": "BaseBdev2", 00:16:38.824 "uuid": "471a5b47-94b5-5ce5-9c27-10c95f405c90", 00:16:38.824 "is_configured": true, 00:16:38.824 "data_offset": 256, 00:16:38.824 "data_size": 7936 00:16:38.824 } 00:16:38.824 ] 00:16:38.824 }' 00:16:38.825 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.825 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:39.394 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:39.394 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.394 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:39.394 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:39.394 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.394 12:59:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.394 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.394 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.394 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:39.395 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.395 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.395 "name": "raid_bdev1", 00:16:39.395 "uuid": "4db0de34-caa4-4d14-a677-181640916e04", 00:16:39.395 "strip_size_kb": 0, 00:16:39.395 "state": "online", 00:16:39.395 "raid_level": "raid1", 00:16:39.395 "superblock": true, 00:16:39.395 "num_base_bdevs": 2, 00:16:39.395 "num_base_bdevs_discovered": 2, 00:16:39.395 "num_base_bdevs_operational": 2, 00:16:39.395 "base_bdevs_list": [ 00:16:39.395 { 00:16:39.395 "name": "spare", 00:16:39.395 "uuid": "f9e72ff6-b029-5d22-a86b-c5ec91792d03", 00:16:39.395 "is_configured": true, 00:16:39.395 "data_offset": 256, 00:16:39.395 "data_size": 7936 00:16:39.395 }, 00:16:39.395 { 00:16:39.395 "name": "BaseBdev2", 00:16:39.395 "uuid": "471a5b47-94b5-5ce5-9c27-10c95f405c90", 00:16:39.395 "is_configured": true, 00:16:39.395 "data_offset": 256, 00:16:39.395 "data_size": 7936 00:16:39.395 } 00:16:39.395 ] 00:16:39.395 }' 00:16:39.395 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.395 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:39.395 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.395 12:59:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:39.395 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.395 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:39.395 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.395 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:39.395 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.395 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.395 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:39.395 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.395 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:39.395 [2024-11-26 12:59:56.991935] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:39.395 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.395 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:39.395 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.395 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.395 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:39.395 12:59:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:39.395 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:39.395 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.395 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.395 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.395 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.395 12:59:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.395 12:59:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.395 12:59:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.395 12:59:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:39.395 12:59:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.395 12:59:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.395 "name": "raid_bdev1", 00:16:39.395 "uuid": "4db0de34-caa4-4d14-a677-181640916e04", 00:16:39.395 "strip_size_kb": 0, 00:16:39.395 "state": "online", 00:16:39.395 "raid_level": "raid1", 00:16:39.395 "superblock": true, 00:16:39.395 "num_base_bdevs": 2, 00:16:39.395 "num_base_bdevs_discovered": 1, 00:16:39.395 "num_base_bdevs_operational": 1, 00:16:39.395 "base_bdevs_list": [ 00:16:39.395 { 00:16:39.395 "name": null, 00:16:39.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.395 "is_configured": false, 00:16:39.395 
"data_offset": 0, 00:16:39.395 "data_size": 7936 00:16:39.395 }, 00:16:39.395 { 00:16:39.395 "name": "BaseBdev2", 00:16:39.395 "uuid": "471a5b47-94b5-5ce5-9c27-10c95f405c90", 00:16:39.395 "is_configured": true, 00:16:39.395 "data_offset": 256, 00:16:39.395 "data_size": 7936 00:16:39.395 } 00:16:39.395 ] 00:16:39.395 }' 00:16:39.395 12:59:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.395 12:59:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:39.964 12:59:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:39.964 12:59:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.964 12:59:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:39.964 [2024-11-26 12:59:57.423295] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:39.964 [2024-11-26 12:59:57.423422] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:39.964 [2024-11-26 12:59:57.423440] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:39.964 [2024-11-26 12:59:57.423474] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:39.964 [2024-11-26 12:59:57.426166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:39.964 [2024-11-26 12:59:57.427998] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:39.964 12:59:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.964 12:59:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:40.903 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.903 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.903 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.903 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.903 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.903 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.903 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.903 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.903 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:40.903 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.903 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:40.903 "name": "raid_bdev1", 00:16:40.903 "uuid": "4db0de34-caa4-4d14-a677-181640916e04", 00:16:40.903 "strip_size_kb": 0, 00:16:40.903 "state": "online", 00:16:40.903 "raid_level": "raid1", 00:16:40.903 "superblock": true, 00:16:40.903 "num_base_bdevs": 2, 00:16:40.903 "num_base_bdevs_discovered": 2, 00:16:40.903 "num_base_bdevs_operational": 2, 00:16:40.903 "process": { 00:16:40.903 "type": "rebuild", 00:16:40.903 "target": "spare", 00:16:40.903 "progress": { 00:16:40.903 "blocks": 2560, 00:16:40.903 "percent": 32 00:16:40.903 } 00:16:40.903 }, 00:16:40.903 "base_bdevs_list": [ 00:16:40.903 { 00:16:40.903 "name": "spare", 00:16:40.903 "uuid": "f9e72ff6-b029-5d22-a86b-c5ec91792d03", 00:16:40.903 "is_configured": true, 00:16:40.903 "data_offset": 256, 00:16:40.903 "data_size": 7936 00:16:40.903 }, 00:16:40.903 { 00:16:40.903 "name": "BaseBdev2", 00:16:40.903 "uuid": "471a5b47-94b5-5ce5-9c27-10c95f405c90", 00:16:40.903 "is_configured": true, 00:16:40.903 "data_offset": 256, 00:16:40.903 "data_size": 7936 00:16:40.903 } 00:16:40.903 ] 00:16:40.903 }' 00:16:40.903 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.904 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:40.904 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.164 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:41.164 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:41.164 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.164 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:41.164 [2024-11-26 12:59:58.591170] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:41.164 [2024-11-26 12:59:58.631945] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:41.164 [2024-11-26 12:59:58.631992] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.164 [2024-11-26 12:59:58.632024] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:41.164 [2024-11-26 12:59:58.632031] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:41.164 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.164 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:41.164 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.164 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.164 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:41.164 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:41.164 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:41.164 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.164 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.164 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.164 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.164 12:59:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.164 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.164 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.164 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:41.164 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.164 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.164 "name": "raid_bdev1", 00:16:41.164 "uuid": "4db0de34-caa4-4d14-a677-181640916e04", 00:16:41.164 "strip_size_kb": 0, 00:16:41.164 "state": "online", 00:16:41.164 "raid_level": "raid1", 00:16:41.164 "superblock": true, 00:16:41.164 "num_base_bdevs": 2, 00:16:41.164 "num_base_bdevs_discovered": 1, 00:16:41.164 "num_base_bdevs_operational": 1, 00:16:41.164 "base_bdevs_list": [ 00:16:41.164 { 00:16:41.164 "name": null, 00:16:41.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.164 "is_configured": false, 00:16:41.164 "data_offset": 0, 00:16:41.164 "data_size": 7936 00:16:41.164 }, 00:16:41.164 { 00:16:41.164 "name": "BaseBdev2", 00:16:41.164 "uuid": "471a5b47-94b5-5ce5-9c27-10c95f405c90", 00:16:41.164 "is_configured": true, 00:16:41.164 "data_offset": 256, 00:16:41.164 "data_size": 7936 00:16:41.164 } 00:16:41.164 ] 00:16:41.164 }' 00:16:41.164 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.164 12:59:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:41.423 12:59:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:41.423 12:59:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.423 12:59:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:41.423 [2024-11-26 12:59:59.090292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:41.423 [2024-11-26 12:59:59.090337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.423 [2024-11-26 12:59:59.090360] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:41.423 [2024-11-26 12:59:59.090369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.423 [2024-11-26 12:59:59.090543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.423 [2024-11-26 12:59:59.090556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:41.423 [2024-11-26 12:59:59.090603] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:41.423 [2024-11-26 12:59:59.090613] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:41.423 [2024-11-26 12:59:59.090623] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:41.423 [2024-11-26 12:59:59.090643] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:41.423 [2024-11-26 12:59:59.092965] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:41.423 [2024-11-26 12:59:59.094774] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:41.423 spare 00:16:41.423 12:59:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.423 12:59:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:42.802 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:42.802 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.802 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:42.802 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:42.802 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.802 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.802 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.802 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.802 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.803 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.803 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:16:42.803 "name": "raid_bdev1", 00:16:42.803 "uuid": "4db0de34-caa4-4d14-a677-181640916e04", 00:16:42.803 "strip_size_kb": 0, 00:16:42.803 "state": "online", 00:16:42.803 "raid_level": "raid1", 00:16:42.803 "superblock": true, 00:16:42.803 "num_base_bdevs": 2, 00:16:42.803 "num_base_bdevs_discovered": 2, 00:16:42.803 "num_base_bdevs_operational": 2, 00:16:42.803 "process": { 00:16:42.803 "type": "rebuild", 00:16:42.803 "target": "spare", 00:16:42.803 "progress": { 00:16:42.803 "blocks": 2560, 00:16:42.803 "percent": 32 00:16:42.803 } 00:16:42.803 }, 00:16:42.803 "base_bdevs_list": [ 00:16:42.803 { 00:16:42.803 "name": "spare", 00:16:42.803 "uuid": "f9e72ff6-b029-5d22-a86b-c5ec91792d03", 00:16:42.803 "is_configured": true, 00:16:42.803 "data_offset": 256, 00:16:42.803 "data_size": 7936 00:16:42.803 }, 00:16:42.803 { 00:16:42.803 "name": "BaseBdev2", 00:16:42.803 "uuid": "471a5b47-94b5-5ce5-9c27-10c95f405c90", 00:16:42.803 "is_configured": true, 00:16:42.803 "data_offset": 256, 00:16:42.803 "data_size": 7936 00:16:42.803 } 00:16:42.803 ] 00:16:42.803 }' 00:16:42.803 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.803 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:42.803 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.803 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.803 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:42.803 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.803 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.803 [2024-11-26 
13:00:00.245485] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:42.803 [2024-11-26 13:00:00.298692] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:42.803 [2024-11-26 13:00:00.298761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.803 [2024-11-26 13:00:00.298775] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:42.803 [2024-11-26 13:00:00.298784] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:42.803 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.803 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:42.803 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.803 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.803 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.803 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.803 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:42.803 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.803 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.803 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.803 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.803 13:00:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.803 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.803 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.803 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.803 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.803 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.803 "name": "raid_bdev1", 00:16:42.803 "uuid": "4db0de34-caa4-4d14-a677-181640916e04", 00:16:42.803 "strip_size_kb": 0, 00:16:42.803 "state": "online", 00:16:42.803 "raid_level": "raid1", 00:16:42.803 "superblock": true, 00:16:42.803 "num_base_bdevs": 2, 00:16:42.803 "num_base_bdevs_discovered": 1, 00:16:42.803 "num_base_bdevs_operational": 1, 00:16:42.803 "base_bdevs_list": [ 00:16:42.803 { 00:16:42.803 "name": null, 00:16:42.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.803 "is_configured": false, 00:16:42.803 "data_offset": 0, 00:16:42.803 "data_size": 7936 00:16:42.803 }, 00:16:42.803 { 00:16:42.803 "name": "BaseBdev2", 00:16:42.803 "uuid": "471a5b47-94b5-5ce5-9c27-10c95f405c90", 00:16:42.803 "is_configured": true, 00:16:42.803 "data_offset": 256, 00:16:42.803 "data_size": 7936 00:16:42.803 } 00:16:42.803 ] 00:16:42.803 }' 00:16:42.803 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.803 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.373 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:43.373 13:00:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.373 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:43.373 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:43.373 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.373 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.373 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.373 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.373 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.373 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.373 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.373 "name": "raid_bdev1", 00:16:43.373 "uuid": "4db0de34-caa4-4d14-a677-181640916e04", 00:16:43.373 "strip_size_kb": 0, 00:16:43.373 "state": "online", 00:16:43.373 "raid_level": "raid1", 00:16:43.373 "superblock": true, 00:16:43.373 "num_base_bdevs": 2, 00:16:43.373 "num_base_bdevs_discovered": 1, 00:16:43.373 "num_base_bdevs_operational": 1, 00:16:43.373 "base_bdevs_list": [ 00:16:43.373 { 00:16:43.373 "name": null, 00:16:43.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.373 "is_configured": false, 00:16:43.373 "data_offset": 0, 00:16:43.373 "data_size": 7936 00:16:43.373 }, 00:16:43.373 { 00:16:43.373 "name": "BaseBdev2", 00:16:43.373 "uuid": "471a5b47-94b5-5ce5-9c27-10c95f405c90", 00:16:43.373 "is_configured": true, 00:16:43.373 "data_offset": 256, 
00:16:43.373 "data_size": 7936 00:16:43.373 } 00:16:43.373 ] 00:16:43.373 }' 00:16:43.373 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.373 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:43.373 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.374 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:43.374 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:43.374 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.374 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.374 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.374 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:43.374 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.374 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.374 [2024-11-26 13:00:00.912933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:43.374 [2024-11-26 13:00:00.912983] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.374 [2024-11-26 13:00:00.913000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:43.374 [2024-11-26 13:00:00.913012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.374 [2024-11-26 13:00:00.913148] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.374 [2024-11-26 13:00:00.913161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:43.374 [2024-11-26 13:00:00.913214] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:43.374 [2024-11-26 13:00:00.913254] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:43.374 [2024-11-26 13:00:00.913261] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:43.374 [2024-11-26 13:00:00.913274] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:43.374 BaseBdev1 00:16:43.374 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.374 13:00:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:44.313 13:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:44.313 13:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.313 13:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.313 13:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.313 13:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.313 13:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:44.313 13:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.313 13:00:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.313 13:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.313 13:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.313 13:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.313 13:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.313 13:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.313 13:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.313 13:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.313 13:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.313 "name": "raid_bdev1", 00:16:44.313 "uuid": "4db0de34-caa4-4d14-a677-181640916e04", 00:16:44.313 "strip_size_kb": 0, 00:16:44.313 "state": "online", 00:16:44.313 "raid_level": "raid1", 00:16:44.313 "superblock": true, 00:16:44.313 "num_base_bdevs": 2, 00:16:44.313 "num_base_bdevs_discovered": 1, 00:16:44.313 "num_base_bdevs_operational": 1, 00:16:44.313 "base_bdevs_list": [ 00:16:44.313 { 00:16:44.313 "name": null, 00:16:44.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.313 "is_configured": false, 00:16:44.313 "data_offset": 0, 00:16:44.313 "data_size": 7936 00:16:44.313 }, 00:16:44.313 { 00:16:44.313 "name": "BaseBdev2", 00:16:44.313 "uuid": "471a5b47-94b5-5ce5-9c27-10c95f405c90", 00:16:44.313 "is_configured": true, 00:16:44.313 "data_offset": 256, 00:16:44.313 "data_size": 7936 00:16:44.313 } 00:16:44.313 ] 00:16:44.313 }' 00:16:44.313 13:00:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.313 13:00:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.879 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:44.879 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.879 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:44.879 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:44.879 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.879 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.879 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.879 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.879 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.879 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.879 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.879 "name": "raid_bdev1", 00:16:44.879 "uuid": "4db0de34-caa4-4d14-a677-181640916e04", 00:16:44.879 "strip_size_kb": 0, 00:16:44.879 "state": "online", 00:16:44.879 "raid_level": "raid1", 00:16:44.879 "superblock": true, 00:16:44.879 "num_base_bdevs": 2, 00:16:44.879 "num_base_bdevs_discovered": 1, 00:16:44.879 "num_base_bdevs_operational": 1, 00:16:44.879 "base_bdevs_list": [ 00:16:44.879 { 00:16:44.879 "name": 
null, 00:16:44.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.879 "is_configured": false, 00:16:44.879 "data_offset": 0, 00:16:44.879 "data_size": 7936 00:16:44.879 }, 00:16:44.879 { 00:16:44.879 "name": "BaseBdev2", 00:16:44.879 "uuid": "471a5b47-94b5-5ce5-9c27-10c95f405c90", 00:16:44.879 "is_configured": true, 00:16:44.879 "data_offset": 256, 00:16:44.879 "data_size": 7936 00:16:44.879 } 00:16:44.879 ] 00:16:44.879 }' 00:16:44.879 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.879 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:44.879 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.879 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:44.879 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:44.879 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:16:44.879 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:44.879 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:44.879 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:44.879 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:44.879 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:44.879 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:44.879 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.879 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.879 [2024-11-26 13:00:02.510497] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:44.879 [2024-11-26 13:00:02.510625] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:44.879 [2024-11-26 13:00:02.510641] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:44.879 request: 00:16:44.879 { 00:16:44.879 "base_bdev": "BaseBdev1", 00:16:44.879 "raid_bdev": "raid_bdev1", 00:16:44.879 "method": "bdev_raid_add_base_bdev", 00:16:44.879 "req_id": 1 00:16:44.880 } 00:16:44.880 Got JSON-RPC error response 00:16:44.880 response: 00:16:44.880 { 00:16:44.880 "code": -22, 00:16:44.880 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:44.880 } 00:16:44.880 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:44.880 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:16:44.880 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:44.880 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:44.880 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:44.880 13:00:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:46.258 13:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:16:46.258 13:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.258 13:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.258 13:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.258 13:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:46.258 13:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:46.258 13:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.258 13:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.258 13:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.258 13:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.258 13:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.258 13:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.258 13:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.258 13:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.258 13:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.258 13:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.258 "name": "raid_bdev1", 00:16:46.258 "uuid": "4db0de34-caa4-4d14-a677-181640916e04", 00:16:46.258 "strip_size_kb": 0, 
00:16:46.258 "state": "online", 00:16:46.258 "raid_level": "raid1", 00:16:46.258 "superblock": true, 00:16:46.258 "num_base_bdevs": 2, 00:16:46.258 "num_base_bdevs_discovered": 1, 00:16:46.258 "num_base_bdevs_operational": 1, 00:16:46.258 "base_bdevs_list": [ 00:16:46.258 { 00:16:46.258 "name": null, 00:16:46.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.258 "is_configured": false, 00:16:46.258 "data_offset": 0, 00:16:46.258 "data_size": 7936 00:16:46.258 }, 00:16:46.258 { 00:16:46.258 "name": "BaseBdev2", 00:16:46.258 "uuid": "471a5b47-94b5-5ce5-9c27-10c95f405c90", 00:16:46.258 "is_configured": true, 00:16:46.258 "data_offset": 256, 00:16:46.258 "data_size": 7936 00:16:46.258 } 00:16:46.258 ] 00:16:46.258 }' 00:16:46.258 13:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.258 13:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.517 13:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:46.517 13:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.517 13:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:46.517 13:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:46.517 13:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.517 13:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.518 13:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.518 13:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.518 13:00:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.518 13:00:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.518 13:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.518 "name": "raid_bdev1", 00:16:46.518 "uuid": "4db0de34-caa4-4d14-a677-181640916e04", 00:16:46.518 "strip_size_kb": 0, 00:16:46.518 "state": "online", 00:16:46.518 "raid_level": "raid1", 00:16:46.518 "superblock": true, 00:16:46.518 "num_base_bdevs": 2, 00:16:46.518 "num_base_bdevs_discovered": 1, 00:16:46.518 "num_base_bdevs_operational": 1, 00:16:46.518 "base_bdevs_list": [ 00:16:46.518 { 00:16:46.518 "name": null, 00:16:46.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.518 "is_configured": false, 00:16:46.518 "data_offset": 0, 00:16:46.518 "data_size": 7936 00:16:46.518 }, 00:16:46.518 { 00:16:46.518 "name": "BaseBdev2", 00:16:46.518 "uuid": "471a5b47-94b5-5ce5-9c27-10c95f405c90", 00:16:46.518 "is_configured": true, 00:16:46.518 "data_offset": 256, 00:16:46.518 "data_size": 7936 00:16:46.518 } 00:16:46.518 ] 00:16:46.518 }' 00:16:46.518 13:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.518 13:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:46.518 13:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.518 13:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:46.518 13:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 99556 00:16:46.518 13:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99556 ']' 00:16:46.518 13:00:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99556 00:16:46.518 13:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:16:46.518 13:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:46.518 13:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99556 00:16:46.518 killing process with pid 99556 00:16:46.518 Received shutdown signal, test time was about 60.000000 seconds 00:16:46.518 00:16:46.518 Latency(us) 00:16:46.518 [2024-11-26T13:00:04.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.518 [2024-11-26T13:00:04.202Z] =================================================================================================================== 00:16:46.518 [2024-11-26T13:00:04.202Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:46.518 13:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:46.518 13:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:46.518 13:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99556' 00:16:46.518 13:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 99556 00:16:46.518 [2024-11-26 13:00:04.151648] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:46.518 [2024-11-26 13:00:04.151740] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:46.518 [2024-11-26 13:00:04.151792] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:46.518 [2024-11-26 13:00:04.151800] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:16:46.518 13:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 99556 00:16:46.518 [2024-11-26 13:00:04.184596] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:46.778 13:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:16:46.778 00:16:46.778 real 0m16.112s 00:16:46.778 user 0m21.482s 00:16:46.778 sys 0m1.704s 00:16:46.778 13:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:46.778 13:00:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.778 ************************************ 00:16:46.778 END TEST raid_rebuild_test_sb_md_interleaved 00:16:46.778 ************************************ 00:16:47.039 13:00:04 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:16:47.039 13:00:04 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:16:47.039 13:00:04 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 99556 ']' 00:16:47.039 13:00:04 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 99556 00:16:47.039 13:00:04 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:16:47.039 00:16:47.039 real 9m57.727s 00:16:47.039 user 14m4.924s 00:16:47.039 sys 1m50.824s 00:16:47.039 13:00:04 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:47.039 13:00:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:47.039 ************************************ 00:16:47.039 END TEST bdev_raid 00:16:47.039 ************************************ 00:16:47.039 13:00:04 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:47.039 13:00:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:47.039 13:00:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:47.039 13:00:04 -- common/autotest_common.sh@10 -- # set +x 00:16:47.039 
************************************ 00:16:47.039 START TEST spdkcli_raid 00:16:47.039 ************************************ 00:16:47.039 13:00:04 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:47.300 * Looking for test storage... 00:16:47.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:47.300 13:00:04 spdkcli_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:47.300 13:00:04 spdkcli_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:16:47.300 13:00:04 spdkcli_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:47.300 13:00:04 spdkcli_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:47.300 13:00:04 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:47.300 13:00:04 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:47.300 13:00:04 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:47.300 13:00:04 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:16:47.300 13:00:04 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:16:47.300 13:00:04 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:16:47.300 13:00:04 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:16:47.300 13:00:04 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:16:47.300 13:00:04 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:16:47.300 13:00:04 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:16:47.300 13:00:04 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:47.300 13:00:04 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:16:47.300 13:00:04 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:16:47.300 13:00:04 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:47.300 13:00:04 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:47.300 13:00:04 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:16:47.300 13:00:04 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:16:47.300 13:00:04 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:47.300 13:00:04 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:16:47.300 13:00:04 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:47.300 13:00:04 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:16:47.300 13:00:04 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:16:47.300 13:00:04 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:47.300 13:00:04 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:16:47.300 13:00:04 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:47.300 13:00:04 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:47.300 13:00:04 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:47.300 13:00:04 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:16:47.300 13:00:04 spdkcli_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:47.300 13:00:04 spdkcli_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:47.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.300 --rc genhtml_branch_coverage=1 00:16:47.300 --rc genhtml_function_coverage=1 00:16:47.300 --rc genhtml_legend=1 00:16:47.300 --rc geninfo_all_blocks=1 00:16:47.300 --rc geninfo_unexecuted_blocks=1 00:16:47.300 00:16:47.300 ' 00:16:47.300 13:00:04 spdkcli_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:47.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.300 --rc genhtml_branch_coverage=1 00:16:47.300 --rc genhtml_function_coverage=1 00:16:47.300 --rc genhtml_legend=1 00:16:47.300 --rc geninfo_all_blocks=1 00:16:47.300 --rc geninfo_unexecuted_blocks=1 00:16:47.300 00:16:47.300 ' 00:16:47.300 
13:00:04 spdkcli_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:47.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.300 --rc genhtml_branch_coverage=1 00:16:47.300 --rc genhtml_function_coverage=1 00:16:47.300 --rc genhtml_legend=1 00:16:47.300 --rc geninfo_all_blocks=1 00:16:47.300 --rc geninfo_unexecuted_blocks=1 00:16:47.300 00:16:47.300 ' 00:16:47.300 13:00:04 spdkcli_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:47.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.300 --rc genhtml_branch_coverage=1 00:16:47.300 --rc genhtml_function_coverage=1 00:16:47.300 --rc genhtml_legend=1 00:16:47.300 --rc geninfo_all_blocks=1 00:16:47.300 --rc geninfo_unexecuted_blocks=1 00:16:47.300 00:16:47.300 ' 00:16:47.300 13:00:04 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:47.300 13:00:04 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:47.300 13:00:04 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:47.300 13:00:04 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:16:47.300 13:00:04 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:16:47.300 13:00:04 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:16:47.300 13:00:04 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:16:47.300 13:00:04 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:16:47.300 13:00:04 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:16:47.300 13:00:04 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:16:47.300 13:00:04 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:16:47.300 13:00:04 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:16:47.300 13:00:04 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:16:47.300 13:00:04 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:16:47.300 13:00:04 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:16:47.300 13:00:04 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:16:47.300 13:00:04 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:16:47.300 13:00:04 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:16:47.300 13:00:04 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:16:47.300 13:00:04 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:16:47.300 13:00:04 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:16:47.300 13:00:04 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:16:47.300 13:00:04 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:16:47.300 13:00:04 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:16:47.300 13:00:04 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:16:47.300 13:00:04 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:47.300 13:00:04 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:47.300 13:00:04 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:47.300 13:00:04 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:47.300 13:00:04 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:47.300 13:00:04 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:47.300 13:00:04 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:16:47.300 13:00:04 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:16:47.300 13:00:04 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:47.300 13:00:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:47.300 13:00:04 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:16:47.300 13:00:04 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=100223 00:16:47.300 13:00:04 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:16:47.300 13:00:04 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 100223 00:16:47.300 13:00:04 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 100223 ']' 00:16:47.300 13:00:04 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.300 13:00:04 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:47.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.300 13:00:04 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.300 13:00:04 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:47.300 13:00:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:47.300 [2024-11-26 13:00:04.961939] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:47.300 [2024-11-26 13:00:04.962054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100223 ] 00:16:47.561 [2024-11-26 13:00:05.122798] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:47.561 [2024-11-26 13:00:05.172687] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.561 [2024-11-26 13:00:05.172719] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.131 13:00:05 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:48.131 13:00:05 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:16:48.131 13:00:05 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:16:48.131 13:00:05 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:48.131 13:00:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:48.131 13:00:05 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:16:48.131 13:00:05 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:48.391 13:00:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:48.391 13:00:05 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:16:48.391 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:16:48.391 ' 00:16:49.771 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:16:49.771 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:16:49.771 13:00:07 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:16:49.771 13:00:07 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:49.771 13:00:07 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:16:50.031 13:00:07 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:16:50.031 13:00:07 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:50.031 13:00:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:50.031 13:00:07 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:16:50.031 ' 00:16:50.971 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:16:50.971 13:00:08 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:16:50.971 13:00:08 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:50.971 13:00:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:51.231 13:00:08 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:16:51.231 13:00:08 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:51.231 13:00:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:51.231 13:00:08 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:16:51.231 13:00:08 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:16:51.799 13:00:09 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:16:51.799 13:00:09 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:16:51.799 13:00:09 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:16:51.799 13:00:09 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:51.799 13:00:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:51.799 13:00:09 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:16:51.799 13:00:09 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:51.799 13:00:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:51.799 13:00:09 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:16:51.799 ' 00:16:52.738 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:16:52.738 13:00:10 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:16:52.738 13:00:10 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:52.738 13:00:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:52.997 13:00:10 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:16:52.997 13:00:10 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:52.997 13:00:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:52.997 13:00:10 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:16:52.997 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:16:52.997 ' 00:16:54.378 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:16:54.379 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:16:54.379 13:00:11 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:16:54.379 13:00:11 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:54.379 13:00:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:54.379 13:00:11 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 100223 00:16:54.379 13:00:11 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 100223 ']' 00:16:54.379 13:00:11 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 100223 00:16:54.379 13:00:11 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:16:54.379 13:00:11 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:54.379 13:00:11 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100223 00:16:54.379 13:00:11 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:54.379 13:00:11 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:54.379 killing process with pid 100223 00:16:54.379 13:00:11 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100223' 00:16:54.379 13:00:11 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 100223 00:16:54.379 13:00:11 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 100223 00:16:54.948 13:00:12 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:16:54.948 13:00:12 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 100223 ']' 00:16:54.948 13:00:12 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 100223 00:16:54.948 13:00:12 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 100223 ']' 00:16:54.948 13:00:12 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 100223 00:16:54.948 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (100223) - No such process 00:16:54.948 Process with pid 100223 is not found 00:16:54.948 13:00:12 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 100223 is not found' 00:16:54.948 13:00:12 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:16:54.948 13:00:12 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:16:54.948 13:00:12 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:16:54.948 13:00:12 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:16:54.948 00:16:54.948 real 0m7.778s 00:16:54.948 user 0m16.322s 
00:16:54.948 sys 0m1.178s 00:16:54.948 13:00:12 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:54.948 13:00:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:54.948 ************************************ 00:16:54.948 END TEST spdkcli_raid 00:16:54.948 ************************************ 00:16:54.948 13:00:12 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:16:54.948 13:00:12 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:54.948 13:00:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:54.948 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:16:54.948 ************************************ 00:16:54.948 START TEST blockdev_raid5f 00:16:54.948 ************************************ 00:16:54.948 13:00:12 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:16:54.948 * Looking for test storage... 00:16:54.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:54.948 13:00:12 blockdev_raid5f -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:54.948 13:00:12 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lcov --version 00:16:54.948 13:00:12 blockdev_raid5f -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:55.208 13:00:12 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:55.208 13:00:12 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:55.208 13:00:12 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:55.208 13:00:12 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:55.208 13:00:12 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:16:55.208 13:00:12 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:16:55.208 13:00:12 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:16:55.208 13:00:12 blockdev_raid5f -- 
scripts/common.sh@337 -- # read -ra ver2 00:16:55.208 13:00:12 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:16:55.208 13:00:12 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:16:55.208 13:00:12 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:16:55.208 13:00:12 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:55.208 13:00:12 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:16:55.208 13:00:12 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:16:55.208 13:00:12 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:55.208 13:00:12 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:55.208 13:00:12 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:16:55.208 13:00:12 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:16:55.208 13:00:12 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:55.208 13:00:12 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:16:55.208 13:00:12 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:16:55.208 13:00:12 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:16:55.208 13:00:12 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:16:55.208 13:00:12 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:55.208 13:00:12 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:16:55.208 13:00:12 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:16:55.208 13:00:12 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:55.208 13:00:12 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:55.208 13:00:12 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:16:55.208 13:00:12 blockdev_raid5f -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:55.208 13:00:12 blockdev_raid5f -- common/autotest_common.sh@1694 -- # 
export 'LCOV_OPTS= 00:16:55.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.208 --rc genhtml_branch_coverage=1 00:16:55.208 --rc genhtml_function_coverage=1 00:16:55.208 --rc genhtml_legend=1 00:16:55.208 --rc geninfo_all_blocks=1 00:16:55.208 --rc geninfo_unexecuted_blocks=1 00:16:55.208 00:16:55.208 ' 00:16:55.208 13:00:12 blockdev_raid5f -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:55.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.208 --rc genhtml_branch_coverage=1 00:16:55.208 --rc genhtml_function_coverage=1 00:16:55.208 --rc genhtml_legend=1 00:16:55.208 --rc geninfo_all_blocks=1 00:16:55.208 --rc geninfo_unexecuted_blocks=1 00:16:55.208 00:16:55.208 ' 00:16:55.208 13:00:12 blockdev_raid5f -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:55.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.208 --rc genhtml_branch_coverage=1 00:16:55.208 --rc genhtml_function_coverage=1 00:16:55.208 --rc genhtml_legend=1 00:16:55.208 --rc geninfo_all_blocks=1 00:16:55.208 --rc geninfo_unexecuted_blocks=1 00:16:55.208 00:16:55.208 ' 00:16:55.208 13:00:12 blockdev_raid5f -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:55.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.208 --rc genhtml_branch_coverage=1 00:16:55.208 --rc genhtml_function_coverage=1 00:16:55.208 --rc genhtml_legend=1 00:16:55.208 --rc geninfo_all_blocks=1 00:16:55.208 --rc geninfo_unexecuted_blocks=1 00:16:55.208 00:16:55.208 ' 00:16:55.208 13:00:12 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:55.208 13:00:12 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:16:55.208 13:00:12 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:16:55.208 13:00:12 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:55.208 13:00:12 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:16:55.208 13:00:12 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:16:55.209 13:00:12 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:16:55.209 13:00:12 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:16:55.209 13:00:12 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:16:55.209 13:00:12 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:16:55.209 13:00:12 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:16:55.209 13:00:12 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:16:55.209 13:00:12 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:16:55.209 13:00:12 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:16:55.209 13:00:12 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:16:55.209 13:00:12 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:16:55.209 13:00:12 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:16:55.209 13:00:12 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:16:55.209 13:00:12 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:16:55.209 13:00:12 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:16:55.209 13:00:12 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:16:55.209 13:00:12 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:16:55.209 13:00:12 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:16:55.209 13:00:12 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:16:55.209 13:00:12 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=100481 00:16:55.209 13:00:12 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:55.209 13:00:12 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:55.209 13:00:12 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 100481 00:16:55.209 13:00:12 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 100481 ']' 00:16:55.209 13:00:12 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.209 13:00:12 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:55.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.209 13:00:12 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.209 13:00:12 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:55.209 13:00:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:55.209 [2024-11-26 13:00:12.787960] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:55.209 [2024-11-26 13:00:12.788074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100481 ] 00:16:55.469 [2024-11-26 13:00:12.949215] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.469 [2024-11-26 13:00:12.997327] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.039 13:00:13 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:56.039 13:00:13 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:16:56.039 13:00:13 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:16:56.039 13:00:13 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:16:56.039 13:00:13 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:16:56.039 13:00:13 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.039 13:00:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:56.039 Malloc0 00:16:56.039 Malloc1 00:16:56.039 Malloc2 00:16:56.039 13:00:13 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.039 13:00:13 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:16:56.039 13:00:13 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.039 13:00:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:56.039 13:00:13 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.039 13:00:13 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:16:56.039 13:00:13 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:16:56.039 13:00:13 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.039 13:00:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:56.039 13:00:13 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.039 13:00:13 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:16:56.039 13:00:13 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.039 13:00:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:56.039 13:00:13 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.040 13:00:13 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:56.040 13:00:13 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.040 13:00:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:56.300 13:00:13 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.300 13:00:13 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:16:56.300 13:00:13 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:16:56.300 13:00:13 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:16:56.300 13:00:13 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.300 13:00:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:56.300 13:00:13 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.300 13:00:13 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:16:56.300 13:00:13 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "50a11816-386e-44fc-8e0c-bd43ff6c8f00"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "50a11816-386e-44fc-8e0c-bd43ff6c8f00",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "50a11816-386e-44fc-8e0c-bd43ff6c8f00",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "73f1da78-847f-4edd-be70-ef21d5640868",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "421afb10-ddf5-490e-87b5-8738d168cd62",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "85130b52-7bbf-4d24-9eb1-380cbc5b419e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:16:56.300 13:00:13 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:16:56.300 13:00:13 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:16:56.300 13:00:13 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:16:56.300 13:00:13 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:16:56.300 13:00:13 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 100481 00:16:56.300 13:00:13 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 100481 ']' 00:16:56.300 13:00:13 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 100481 00:16:56.300 13:00:13 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:16:56.300 13:00:13 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:56.300 13:00:13 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100481 00:16:56.300 13:00:13 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:56.300 13:00:13 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:56.300 killing process with pid 100481 00:16:56.300 13:00:13 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100481' 00:16:56.300 13:00:13 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 100481 00:16:56.300 13:00:13 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 100481 00:16:56.871 13:00:14 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:56.872 13:00:14 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:16:56.872 13:00:14 
blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:56.872 13:00:14 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:56.872 13:00:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:56.872 ************************************ 00:16:56.872 START TEST bdev_hello_world 00:16:56.872 ************************************ 00:16:56.872 13:00:14 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:16:56.872 [2024-11-26 13:00:14.380857] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:56.872 [2024-11-26 13:00:14.380998] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100516 ] 00:16:56.872 [2024-11-26 13:00:14.545359] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.131 [2024-11-26 13:00:14.595302] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.132 [2024-11-26 13:00:14.793673] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:16:57.132 [2024-11-26 13:00:14.793732] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:16:57.132 [2024-11-26 13:00:14.793752] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:16:57.132 [2024-11-26 13:00:14.794151] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:16:57.132 [2024-11-26 13:00:14.794343] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:16:57.132 [2024-11-26 13:00:14.794370] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:16:57.132 [2024-11-26 13:00:14.794430] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from 
bdev : Hello World! 00:16:57.132 00:16:57.132 [2024-11-26 13:00:14.794460] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:16:57.392 00:16:57.392 real 0m0.759s 00:16:57.392 user 0m0.406s 00:16:57.392 sys 0m0.237s 00:16:57.392 13:00:15 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:57.392 13:00:15 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:57.392 ************************************ 00:16:57.392 END TEST bdev_hello_world 00:16:57.392 ************************************ 00:16:57.653 13:00:15 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:16:57.653 13:00:15 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:57.653 13:00:15 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:57.653 13:00:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:57.653 ************************************ 00:16:57.653 START TEST bdev_bounds 00:16:57.653 ************************************ 00:16:57.653 13:00:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:16:57.653 13:00:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=100546 00:16:57.653 13:00:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:16:57.653 13:00:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:57.653 Process bdevio pid: 100546 00:16:57.653 13:00:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 100546' 00:16:57.653 13:00:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 100546 00:16:57.653 13:00:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 100546 ']' 00:16:57.653 
13:00:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.653 13:00:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:57.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.653 13:00:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.653 13:00:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:57.653 13:00:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:57.653 [2024-11-26 13:00:15.213575] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:57.653 [2024-11-26 13:00:15.213716] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100546 ] 00:16:57.914 [2024-11-26 13:00:15.377822] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:57.914 [2024-11-26 13:00:15.426674] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.914 [2024-11-26 13:00:15.426800] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.914 [2024-11-26 13:00:15.426931] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:58.485 13:00:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:58.485 13:00:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:16:58.485 13:00:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:16:58.485 I/O targets: 00:16:58.485 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:16:58.485 
00:16:58.485 00:16:58.485 CUnit - A unit testing framework for C - Version 2.1-3 00:16:58.485 http://cunit.sourceforge.net/ 00:16:58.485 00:16:58.485 00:16:58.485 Suite: bdevio tests on: raid5f 00:16:58.485 Test: blockdev write read block ...passed 00:16:58.485 Test: blockdev write zeroes read block ...passed 00:16:58.485 Test: blockdev write zeroes read no split ...passed 00:16:58.745 Test: blockdev write zeroes read split ...passed 00:16:58.745 Test: blockdev write zeroes read split partial ...passed 00:16:58.745 Test: blockdev reset ...passed 00:16:58.745 Test: blockdev write read 8 blocks ...passed 00:16:58.745 Test: blockdev write read size > 128k ...passed 00:16:58.745 Test: blockdev write read invalid size ...passed 00:16:58.745 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:58.745 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:58.745 Test: blockdev write read max offset ...passed 00:16:58.745 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:58.745 Test: blockdev writev readv 8 blocks ...passed 00:16:58.745 Test: blockdev writev readv 30 x 1block ...passed 00:16:58.745 Test: blockdev writev readv block ...passed 00:16:58.745 Test: blockdev writev readv size > 128k ...passed 00:16:58.745 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:58.745 Test: blockdev comparev and writev ...passed 00:16:58.745 Test: blockdev nvme passthru rw ...passed 00:16:58.745 Test: blockdev nvme passthru vendor specific ...passed 00:16:58.745 Test: blockdev nvme admin passthru ...passed 00:16:58.745 Test: blockdev copy ...passed 00:16:58.745 00:16:58.745 Run Summary: Type Total Ran Passed Failed Inactive 00:16:58.745 suites 1 1 n/a 0 0 00:16:58.745 tests 23 23 23 0 0 00:16:58.745 asserts 130 130 130 0 n/a 00:16:58.745 00:16:58.745 Elapsed time = 0.308 seconds 00:16:58.745 0 00:16:58.745 13:00:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 100546 
00:16:58.745 13:00:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 100546 ']' 00:16:58.745 13:00:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 100546 00:16:58.745 13:00:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:16:58.745 13:00:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:58.745 13:00:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100546 00:16:58.745 13:00:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:58.745 13:00:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:58.745 killing process with pid 100546 00:16:58.745 13:00:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100546' 00:16:58.745 13:00:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 100546 00:16:58.745 13:00:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 100546 00:16:59.006 13:00:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:16:59.006 00:16:59.006 real 0m1.476s 00:16:59.006 user 0m3.478s 00:16:59.006 sys 0m0.379s 00:16:59.006 13:00:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:59.006 13:00:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:59.006 ************************************ 00:16:59.006 END TEST bdev_bounds 00:16:59.006 ************************************ 00:16:59.006 13:00:16 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:16:59.006 13:00:16 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:59.006 13:00:16 blockdev_raid5f -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:16:59.006 13:00:16 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:59.006 ************************************ 00:16:59.006 START TEST bdev_nbd 00:16:59.006 ************************************ 00:16:59.006 13:00:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:16:59.006 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:16:59.006 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:16:59.006 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:59.007 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:59.007 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:16:59.007 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:16:59.007 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:16:59.007 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:16:59.007 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:16:59.007 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:16:59.007 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:16:59.007 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:16:59.007 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:16:59.007 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:16:59.007 13:00:16 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:16:59.007 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=100594 00:16:59.007 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:16:59.007 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:59.007 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 100594 /var/tmp/spdk-nbd.sock 00:16:59.270 13:00:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 100594 ']' 00:16:59.270 13:00:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:59.270 13:00:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:59.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:59.270 13:00:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:59.270 13:00:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:59.270 13:00:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:59.270 [2024-11-26 13:00:16.767243] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:59.270 [2024-11-26 13:00:16.767364] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.270 [2024-11-26 13:00:16.933162] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.531 [2024-11-26 13:00:16.981715] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.101 13:00:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:00.101 13:00:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:17:00.101 13:00:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:17:00.101 13:00:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:00.101 13:00:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:17:00.101 13:00:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:17:00.101 13:00:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:17:00.101 13:00:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:00.101 13:00:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:17:00.101 13:00:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:17:00.101 13:00:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:17:00.101 13:00:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:17:00.101 13:00:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:17:00.101 13:00:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:17:00.101 13:00:17 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:17:00.361 13:00:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:17:00.361 13:00:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:17:00.361 13:00:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:17:00.361 13:00:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:00.361 13:00:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:00.361 13:00:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:00.361 13:00:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:00.361 13:00:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:00.361 13:00:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:00.361 13:00:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:00.361 13:00:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:00.361 13:00:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:00.361 1+0 records in 00:17:00.361 1+0 records out 00:17:00.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039732 s, 10.3 MB/s 00:17:00.361 13:00:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:00.361 13:00:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:00.361 13:00:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:00.361 13:00:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:17:00.361 13:00:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:00.361 13:00:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:00.361 13:00:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:17:00.361 13:00:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:00.361 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:17:00.361 { 00:17:00.361 "nbd_device": "/dev/nbd0", 00:17:00.361 "bdev_name": "raid5f" 00:17:00.361 } 00:17:00.361 ]' 00:17:00.361 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:17:00.361 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:17:00.361 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:17:00.361 { 00:17:00.361 "nbd_device": "/dev/nbd0", 00:17:00.361 "bdev_name": "raid5f" 00:17:00.361 } 00:17:00.361 ]' 00:17:00.621 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:00.621 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:00.621 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:00.621 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:00.621 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:00.621 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:00.621 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:00.621 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:17:00.621 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:00.621 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:00.621 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:00.621 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:00.621 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:00.621 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:00.621 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:00.621 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:00.621 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:00.622 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:00.881 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:00.881 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:00.881 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:00.881 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:00.881 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:00.881 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:01.140 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:01.140 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:01.140 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:01.141 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:17:01.141 13:00:18 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:17:01.141 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:17:01.141 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:17:01.141 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:01.141 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:17:01.141 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:01.141 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:17:01.141 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:01.141 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:17:01.141 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:01.141 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:17:01.141 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:01.141 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:01.141 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:01.141 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:17:01.141 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:01.141 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:01.141 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:17:01.141 /dev/nbd0 00:17:01.141 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:01.141 13:00:18 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:01.141 13:00:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:01.141 13:00:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:01.141 13:00:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:01.141 13:00:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:01.141 13:00:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:01.141 13:00:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:01.141 13:00:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:01.141 13:00:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:01.141 13:00:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:01.401 1+0 records in 00:17:01.401 1+0 records out 00:17:01.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606011 s, 6.8 MB/s 00:17:01.401 13:00:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:01.401 13:00:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:01.401 13:00:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:01.401 13:00:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:01.401 13:00:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:01.401 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:01.401 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:01.401 13:00:18 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:01.401 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:01.401 13:00:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:01.401 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:01.401 { 00:17:01.401 "nbd_device": "/dev/nbd0", 00:17:01.401 "bdev_name": "raid5f" 00:17:01.401 } 00:17:01.401 ]' 00:17:01.401 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:01.401 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:01.401 { 00:17:01.401 "nbd_device": "/dev/nbd0", 00:17:01.401 "bdev_name": "raid5f" 00:17:01.401 } 00:17:01.401 ]' 00:17:01.663 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:17:01.663 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:01.663 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:17:01.663 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:17:01.663 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:17:01.663 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:17:01.663 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:17:01.663 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:17:01.663 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:17:01.663 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:01.663 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:01.663 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:01.663 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:01.663 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:17:01.663 256+0 records in 00:17:01.663 256+0 records out 00:17:01.664 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137384 s, 76.3 MB/s 00:17:01.664 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:01.664 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:01.664 256+0 records in 00:17:01.664 256+0 records out 00:17:01.664 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0281376 s, 37.3 MB/s 00:17:01.664 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:17:01.664 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:17:01.664 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:01.664 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:01.664 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:01.664 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:01.664 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:01.664 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:01.664 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:17:01.664 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:01.664 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:01.664 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:01.664 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:01.664 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:01.664 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:01.664 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:01.664 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:01.923 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:01.923 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:01.923 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:01.923 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:01.923 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:01.923 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:01.923 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:01.923 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:01.923 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:01.923 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:01.923 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:17:02.182 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:02.182 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:02.182 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:02.182 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:02.182 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:02.182 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:02.182 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:02.182 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:02.182 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:02.182 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:17:02.182 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:02.182 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:17:02.182 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:02.182 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:02.182 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:17:02.182 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:17:02.182 malloc_lvol_verify 00:17:02.441 13:00:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:17:02.441 323ea9fe-b2e8-4f0c-9191-8d8c7107d37f 00:17:02.441 13:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:17:02.699 bd335157-cd55-4934-9aee-b1b0ba0f4ae8 00:17:02.699 13:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:17:02.959 /dev/nbd0 00:17:02.959 13:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:17:02.959 13:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:17:02.959 13:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:17:02.959 13:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:17:02.959 13:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:17:02.959 mke2fs 1.47.0 (5-Feb-2023) 00:17:02.959 Discarding device blocks: 0/4096 done 00:17:02.959 Creating filesystem with 4096 1k blocks and 1024 inodes 00:17:02.959 00:17:02.959 Allocating group tables: 0/1 done 00:17:02.959 Writing inode tables: 0/1 done 00:17:02.959 Creating journal (1024 blocks): done 00:17:02.959 Writing superblocks and filesystem accounting information: 0/1 done 00:17:02.959 00:17:02.959 13:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:02.959 13:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:02.959 13:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:02.959 13:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:02.959 13:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:02.959 13:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:02.959 13:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:03.219 13:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:03.219 13:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:03.219 13:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:03.219 13:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:03.219 13:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:03.219 13:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:03.219 13:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:03.219 13:00:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:03.219 13:00:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 100594 00:17:03.220 13:00:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 100594 ']' 00:17:03.220 13:00:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 100594 00:17:03.220 13:00:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:17:03.220 13:00:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:03.220 13:00:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100594 00:17:03.220 13:00:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:03.220 13:00:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:03.220 killing process with pid 100594 00:17:03.220 13:00:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100594' 00:17:03.220 13:00:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 100594 00:17:03.220 13:00:20 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 100594 00:17:03.479 13:00:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:17:03.479 00:17:03.479 real 0m4.387s 00:17:03.479 user 0m6.330s 00:17:03.479 sys 0m1.292s 00:17:03.479 13:00:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:03.479 13:00:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:03.479 ************************************ 00:17:03.479 END TEST bdev_nbd 00:17:03.479 ************************************ 00:17:03.479 13:00:21 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:17:03.479 13:00:21 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:17:03.479 13:00:21 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:17:03.479 13:00:21 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:17:03.479 13:00:21 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:03.479 13:00:21 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:03.479 13:00:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:03.479 ************************************ 00:17:03.479 START TEST bdev_fio 00:17:03.479 ************************************ 00:17:03.479 13:00:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:17:03.479 13:00:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:17:03.479 13:00:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:17:03.479 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:17:03.479 13:00:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:17:03.479 13:00:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:17:03.479 13:00:21 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:17:03.479 13:00:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:17:03.479 13:00:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:17:03.479 13:00:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:03.480 13:00:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:17:03.480 13:00:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:17:03.480 13:00:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:17:03.480 13:00:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:17:03.480 13:00:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:03.480 13:00:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:17:03.480 13:00:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:17:03.480 13:00:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:03.480 13:00:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:03.741 ************************************ 00:17:03.741 START TEST bdev_fio_rw_verify 00:17:03.741 ************************************ 00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:03.741 13:00:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:04.002 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:04.002 fio-3.35 00:17:04.002 Starting 1 thread 00:17:16.221 00:17:16.221 job_raid5f: (groupid=0, jobs=1): err= 0: pid=100783: Tue Nov 26 13:00:32 2024 00:17:16.221 read: IOPS=12.0k, BW=47.0MiB/s (49.3MB/s)(470MiB/10001msec) 00:17:16.221 slat (usec): min=16, max=201, avg=19.36, stdev= 3.37 00:17:16.221 clat (usec): min=11, max=1024, avg=132.10, stdev=49.24 00:17:16.221 lat (usec): min=30, max=1225, avg=151.46, stdev=50.41 00:17:16.221 clat percentiles (usec): 00:17:16.221 | 50.000th=[ 135], 99.000th=[ 221], 99.900th=[ 408], 99.990th=[ 914], 00:17:16.221 | 99.999th=[ 955] 00:17:16.221 write: IOPS=12.7k, BW=49.5MiB/s (51.9MB/s)(489MiB/9873msec); 0 zone resets 00:17:16.221 slat (usec): min=8, max=274, avg=17.32, stdev= 4.45 00:17:16.221 clat (usec): min=58, max=1726, avg=304.54, stdev=43.90 00:17:16.221 lat (usec): min=74, max=2000, avg=321.86, stdev=45.01 00:17:16.221 clat percentiles (usec): 00:17:16.221 | 50.000th=[ 310], 99.000th=[ 388], 99.900th=[ 586], 99.990th=[ 996], 00:17:16.221 | 99.999th=[ 1631] 00:17:16.221 bw ( KiB/s): min=46696, max=52208, per=98.36%, avg=49862.32, stdev=1332.41, samples=19 00:17:16.221 iops : min=11674, max=13052, avg=12465.58, stdev=333.10, samples=19 00:17:16.221 lat (usec) : 20=0.01%, 50=0.01%, 
100=14.90%, 250=39.47%, 500=45.51% 00:17:16.221 lat (usec) : 750=0.07%, 1000=0.03% 00:17:16.221 lat (msec) : 2=0.01% 00:17:16.221 cpu : usr=98.68%, sys=0.54%, ctx=29, majf=0, minf=13010 00:17:16.221 IO depths : 1=7.6%, 2=19.8%, 4=55.2%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:16.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.221 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.221 issued rwts: total=120405,125124,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.221 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:16.221 00:17:16.221 Run status group 0 (all jobs): 00:17:16.221 READ: bw=47.0MiB/s (49.3MB/s), 47.0MiB/s-47.0MiB/s (49.3MB/s-49.3MB/s), io=470MiB (493MB), run=10001-10001msec 00:17:16.221 WRITE: bw=49.5MiB/s (51.9MB/s), 49.5MiB/s-49.5MiB/s (51.9MB/s-51.9MB/s), io=489MiB (513MB), run=9873-9873msec 00:17:16.221 ----------------------------------------------------- 00:17:16.221 Suppressions used: 00:17:16.221 count bytes template 00:17:16.221 1 7 /usr/src/fio/parse.c 00:17:16.221 968 92928 /usr/src/fio/iolog.c 00:17:16.221 1 8 libtcmalloc_minimal.so 00:17:16.221 1 904 libcrypto.so 00:17:16.221 ----------------------------------------------------- 00:17:16.221 00:17:16.221 00:17:16.221 real 0m11.247s 00:17:16.221 user 0m11.550s 00:17:16.221 sys 0m0.642s 00:17:16.221 13:00:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:16.221 13:00:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:16.221 ************************************ 00:17:16.221 END TEST bdev_fio_rw_verify 00:17:16.221 ************************************ 00:17:16.221 13:00:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:17:16.221 13:00:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:16.221 13:00:32 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:16.221 13:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:16.221 13:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:17:16.221 13:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:17:16.221 13:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:17:16.221 13:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:17:16.221 13:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:16.221 13:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:17:16.221 13:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:17:16.221 13:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:16.221 13:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:17:16.221 13:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:17:16.221 13:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:17:16.221 13:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:17:16.221 13:00:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "50a11816-386e-44fc-8e0c-bd43ff6c8f00"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "50a11816-386e-44fc-8e0c-bd43ff6c8f00",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "50a11816-386e-44fc-8e0c-bd43ff6c8f00",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "73f1da78-847f-4edd-be70-ef21d5640868",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "421afb10-ddf5-490e-87b5-8738d168cd62",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "85130b52-7bbf-4d24-9eb1-380cbc5b419e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:17:16.221 13:00:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:16.221 13:00:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:16.221 13:00:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:16.221 /home/vagrant/spdk_repo/spdk 00:17:16.221 13:00:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:16.221 13:00:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:16.221 13:00:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:17:16.221 00:17:16.221 real 0m11.538s 00:17:16.221 user 0m11.661s 00:17:16.221 sys 0m0.801s 00:17:16.221 13:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:16.221 13:00:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:16.221 ************************************ 00:17:16.221 END TEST bdev_fio 00:17:16.222 ************************************ 00:17:16.222 13:00:32 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:16.222 13:00:32 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:16.222 13:00:32 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:17:16.222 13:00:32 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:16.222 13:00:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:16.222 ************************************ 00:17:16.222 START TEST bdev_verify 00:17:16.222 ************************************ 00:17:16.222 13:00:32 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:16.222 [2024-11-26 13:00:32.835995] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:16.222 [2024-11-26 13:00:32.836149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100935 ] 00:17:16.222 [2024-11-26 13:00:33.003015] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:16.222 [2024-11-26 13:00:33.055670] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.222 [2024-11-26 13:00:33.055762] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.222 Running I/O for 5 seconds... 00:17:17.731 11070.00 IOPS, 43.24 MiB/s [2024-11-26T13:00:36.366Z] 11185.00 IOPS, 43.69 MiB/s [2024-11-26T13:00:37.371Z] 11220.67 IOPS, 43.83 MiB/s [2024-11-26T13:00:38.309Z] 11212.25 IOPS, 43.80 MiB/s [2024-11-26T13:00:38.309Z] 11208.80 IOPS, 43.78 MiB/s 00:17:20.625 Latency(us) 00:17:20.625 [2024-11-26T13:00:38.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.625 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:20.625 Verification LBA range: start 0x0 length 0x2000 00:17:20.625 raid5f : 5.02 4507.53 17.61 0.00 0.00 42705.66 150.25 29992.02 00:17:20.625 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:20.625 Verification LBA range: start 0x2000 length 0x2000 00:17:20.625 raid5f : 5.02 6711.16 26.22 0.00 0.00 28668.23 197.65 22093.36 00:17:20.625 [2024-11-26T13:00:38.309Z] =================================================================================================================== 00:17:20.625 [2024-11-26T13:00:38.309Z] Total : 11218.69 43.82 0.00 0.00 34306.39 150.25 29992.02 00:17:20.885 00:17:20.885 real 0m5.790s 00:17:20.885 user 0m10.723s 00:17:20.885 sys 0m0.252s 00:17:20.885 13:00:38 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:20.885 13:00:38 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:20.885 ************************************ 00:17:20.885 END TEST bdev_verify 00:17:20.885 ************************************ 00:17:21.145 13:00:38 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:21.145 13:00:38 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:17:21.145 13:00:38 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:21.145 13:00:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:21.145 ************************************ 00:17:21.145 START TEST bdev_verify_big_io 00:17:21.145 ************************************ 00:17:21.145 13:00:38 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:21.145 [2024-11-26 13:00:38.691386] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:21.145 [2024-11-26 13:00:38.691548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101017 ] 00:17:21.405 [2024-11-26 13:00:38.856273] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:21.405 [2024-11-26 13:00:38.908668] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.405 [2024-11-26 13:00:38.908697] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.665 Running I/O for 5 seconds... 
00:17:23.986 633.00 IOPS, 39.56 MiB/s [2024-11-26T13:00:42.611Z] 823.00 IOPS, 51.44 MiB/s [2024-11-26T13:00:43.550Z] 824.33 IOPS, 51.52 MiB/s [2024-11-26T13:00:44.491Z] 840.50 IOPS, 52.53 MiB/s [2024-11-26T13:00:44.491Z] 863.20 IOPS, 53.95 MiB/s 00:17:26.807 Latency(us) 00:17:26.807 [2024-11-26T13:00:44.491Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.807 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:26.807 Verification LBA range: start 0x0 length 0x200 00:17:26.807 raid5f : 5.28 361.12 22.57 0.00 0.00 8780874.44 118.94 369977.91 00:17:26.807 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:26.807 Verification LBA range: start 0x200 length 0x200 00:17:26.807 raid5f : 5.20 464.28 29.02 0.00 0.00 6903434.44 279.03 302209.68 00:17:26.807 [2024-11-26T13:00:44.491Z] =================================================================================================================== 00:17:26.807 [2024-11-26T13:00:44.491Z] Total : 825.40 51.59 0.00 0.00 7731716.79 118.94 369977.91 00:17:27.068 00:17:27.068 real 0m6.039s 00:17:27.068 user 0m11.211s 00:17:27.068 sys 0m0.263s 00:17:27.068 13:00:44 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:27.068 13:00:44 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:27.068 ************************************ 00:17:27.068 END TEST bdev_verify_big_io 00:17:27.068 ************************************ 00:17:27.068 13:00:44 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:27.068 13:00:44 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:27.068 13:00:44 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:27.068 13:00:44 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:27.068 ************************************ 00:17:27.068 START TEST bdev_write_zeroes 00:17:27.068 ************************************ 00:17:27.068 13:00:44 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:27.328 [2024-11-26 13:00:44.804695] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:27.328 [2024-11-26 13:00:44.804820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101099 ] 00:17:27.328 [2024-11-26 13:00:44.965382] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.589 [2024-11-26 13:00:45.016452] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.589 Running I/O for 1 seconds... 
00:17:28.972 30087.00 IOPS, 117.53 MiB/s 00:17:28.972 Latency(us) 00:17:28.972 [2024-11-26T13:00:46.656Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.972 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:28.972 raid5f : 1.01 30066.18 117.45 0.00 0.00 4244.80 1366.53 6095.71 00:17:28.972 [2024-11-26T13:00:46.656Z] =================================================================================================================== 00:17:28.972 [2024-11-26T13:00:46.656Z] Total : 30066.18 117.45 0.00 0.00 4244.80 1366.53 6095.71 00:17:28.972 00:17:28.972 real 0m1.757s 00:17:28.972 user 0m1.397s 00:17:28.972 sys 0m0.238s 00:17:28.972 13:00:46 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:28.972 13:00:46 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:28.972 ************************************ 00:17:28.972 END TEST bdev_write_zeroes 00:17:28.972 ************************************ 00:17:28.972 13:00:46 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:28.972 13:00:46 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:28.972 13:00:46 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:28.972 13:00:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:28.972 ************************************ 00:17:28.972 START TEST bdev_json_nonenclosed 00:17:28.972 ************************************ 00:17:28.972 13:00:46 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:28.972 [2024-11-26 
13:00:46.626517] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:28.972 [2024-11-26 13:00:46.626649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101135 ] 00:17:29.233 [2024-11-26 13:00:46.786444] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.233 [2024-11-26 13:00:46.838959] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.233 [2024-11-26 13:00:46.839079] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:29.233 [2024-11-26 13:00:46.839104] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:29.233 [2024-11-26 13:00:46.839116] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:29.493 00:17:29.493 real 0m0.408s 00:17:29.493 user 0m0.171s 00:17:29.493 sys 0m0.133s 00:17:29.493 13:00:46 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:29.493 13:00:46 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:29.493 ************************************ 00:17:29.493 END TEST bdev_json_nonenclosed 00:17:29.493 ************************************ 00:17:29.493 13:00:47 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:29.493 13:00:47 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:29.493 13:00:47 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:29.493 13:00:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:29.493 
************************************ 00:17:29.493 START TEST bdev_json_nonarray 00:17:29.493 ************************************ 00:17:29.493 13:00:47 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:29.493 [2024-11-26 13:00:47.118631] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:29.493 [2024-11-26 13:00:47.118758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101162 ] 00:17:29.753 [2024-11-26 13:00:47.282245] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.753 [2024-11-26 13:00:47.336995] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.753 [2024-11-26 13:00:47.337136] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:17:29.753 [2024-11-26 13:00:47.337170] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:29.754 [2024-11-26 13:00:47.337184] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:30.015 00:17:30.015 real 0m0.428s 00:17:30.015 user 0m0.183s 00:17:30.015 sys 0m0.140s 00:17:30.015 13:00:47 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:30.015 13:00:47 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:30.015 ************************************ 00:17:30.015 END TEST bdev_json_nonarray 00:17:30.015 ************************************ 00:17:30.015 13:00:47 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:17:30.015 13:00:47 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:17:30.015 13:00:47 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:17:30.015 13:00:47 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:17:30.015 13:00:47 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:17:30.015 13:00:47 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:30.015 13:00:47 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:30.015 13:00:47 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:17:30.015 13:00:47 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:17:30.015 13:00:47 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:17:30.015 13:00:47 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:17:30.015 00:17:30.015 real 0m35.083s 00:17:30.015 user 0m47.462s 00:17:30.015 sys 0m4.825s 00:17:30.015 13:00:47 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:30.015 13:00:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:30.015 
************************************ 00:17:30.015 END TEST blockdev_raid5f 00:17:30.015 ************************************ 00:17:30.015 13:00:47 -- spdk/autotest.sh@194 -- # uname -s 00:17:30.015 13:00:47 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:17:30.015 13:00:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:30.015 13:00:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:30.015 13:00:47 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:17:30.015 13:00:47 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:17:30.015 13:00:47 -- spdk/autotest.sh@256 -- # timing_exit lib 00:17:30.015 13:00:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:30.015 13:00:47 -- common/autotest_common.sh@10 -- # set +x 00:17:30.015 13:00:47 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:17:30.015 13:00:47 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:17:30.015 13:00:47 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:17:30.015 13:00:47 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:17:30.015 13:00:47 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:30.015 13:00:47 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:30.015 13:00:47 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:17:30.015 13:00:47 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:17:30.015 13:00:47 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:17:30.015 13:00:47 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:30.015 13:00:47 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:17:30.015 13:00:47 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:17:30.015 13:00:47 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:17:30.015 13:00:47 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:17:30.015 13:00:47 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:17:30.015 13:00:47 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:17:30.015 13:00:47 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:17:30.015 13:00:47 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:17:30.015 13:00:47 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:17:30.015 13:00:47 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:17:30.015 13:00:47 -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:30.015 13:00:47 -- common/autotest_common.sh@10 -- # set +x 00:17:30.015 13:00:47 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:17:30.015 13:00:47 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:17:30.015 13:00:47 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:17:30.015 13:00:47 -- common/autotest_common.sh@10 -- # set +x 00:17:32.559 INFO: APP EXITING 00:17:32.559 INFO: killing all VMs 00:17:32.559 INFO: killing vhost app 00:17:32.559 INFO: EXIT DONE 00:17:32.819 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:33.079 Waiting for block devices as requested 00:17:33.079 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:33.079 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:34.022 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:34.284 Cleaning 00:17:34.284 Removing: /var/run/dpdk/spdk0/config 00:17:34.284 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:17:34.284 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:17:34.284 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:17:34.284 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:17:34.284 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:17:34.284 Removing: /var/run/dpdk/spdk0/hugepage_info 00:17:34.284 Removing: /dev/shm/spdk_tgt_trace.pid69239 00:17:34.284 Removing: /var/run/dpdk/spdk0 00:17:34.284 Removing: /var/run/dpdk/spdk_pid100223 00:17:34.284 Removing: /var/run/dpdk/spdk_pid100481 00:17:34.284 Removing: /var/run/dpdk/spdk_pid100516 00:17:34.284 Removing: /var/run/dpdk/spdk_pid100546 00:17:34.284 Removing: /var/run/dpdk/spdk_pid100773 00:17:34.284 Removing: /var/run/dpdk/spdk_pid100935 00:17:34.284 Removing: 
/var/run/dpdk/spdk_pid101017 00:17:34.284 Removing: /var/run/dpdk/spdk_pid101099 00:17:34.284 Removing: /var/run/dpdk/spdk_pid101135 00:17:34.284 Removing: /var/run/dpdk/spdk_pid101162 00:17:34.284 Removing: /var/run/dpdk/spdk_pid69061 00:17:34.284 Removing: /var/run/dpdk/spdk_pid69239 00:17:34.284 Removing: /var/run/dpdk/spdk_pid69446 00:17:34.284 Removing: /var/run/dpdk/spdk_pid69534 00:17:34.284 Removing: /var/run/dpdk/spdk_pid69568 00:17:34.284 Removing: /var/run/dpdk/spdk_pid69685 00:17:34.284 Removing: /var/run/dpdk/spdk_pid69703 00:17:34.284 Removing: /var/run/dpdk/spdk_pid69891 00:17:34.284 Removing: /var/run/dpdk/spdk_pid69971 00:17:34.284 Removing: /var/run/dpdk/spdk_pid70056 00:17:34.284 Removing: /var/run/dpdk/spdk_pid70156 00:17:34.284 Removing: /var/run/dpdk/spdk_pid70242 00:17:34.284 Removing: /var/run/dpdk/spdk_pid70277 00:17:34.284 Removing: /var/run/dpdk/spdk_pid70318 00:17:34.284 Removing: /var/run/dpdk/spdk_pid70389 00:17:34.284 Removing: /var/run/dpdk/spdk_pid70500 00:17:34.284 Removing: /var/run/dpdk/spdk_pid70933 00:17:34.284 Removing: /var/run/dpdk/spdk_pid70987 00:17:34.284 Removing: /var/run/dpdk/spdk_pid71039 00:17:34.284 Removing: /var/run/dpdk/spdk_pid71055 00:17:34.284 Removing: /var/run/dpdk/spdk_pid71126 00:17:34.284 Removing: /var/run/dpdk/spdk_pid71142 00:17:34.284 Removing: /var/run/dpdk/spdk_pid71222 00:17:34.284 Removing: /var/run/dpdk/spdk_pid71238 00:17:34.284 Removing: /var/run/dpdk/spdk_pid71288 00:17:34.284 Removing: /var/run/dpdk/spdk_pid71301 00:17:34.284 Removing: /var/run/dpdk/spdk_pid71353 00:17:34.284 Removing: /var/run/dpdk/spdk_pid71371 00:17:34.284 Removing: /var/run/dpdk/spdk_pid71509 00:17:34.284 Removing: /var/run/dpdk/spdk_pid71546 00:17:34.284 Removing: /var/run/dpdk/spdk_pid71629 00:17:34.284 Removing: /var/run/dpdk/spdk_pid72817 00:17:34.544 Removing: /var/run/dpdk/spdk_pid73018 00:17:34.544 Removing: /var/run/dpdk/spdk_pid73152 00:17:34.544 Removing: /var/run/dpdk/spdk_pid73757 00:17:34.544 Removing: 
/var/run/dpdk/spdk_pid73957 00:17:34.544 Removing: /var/run/dpdk/spdk_pid74091 00:17:34.544 Removing: /var/run/dpdk/spdk_pid74691 00:17:34.544 Removing: /var/run/dpdk/spdk_pid75009 00:17:34.544 Removing: /var/run/dpdk/spdk_pid75139 00:17:34.544 Removing: /var/run/dpdk/spdk_pid76469 00:17:34.544 Removing: /var/run/dpdk/spdk_pid76706 00:17:34.544 Removing: /var/run/dpdk/spdk_pid76839 00:17:34.544 Removing: /var/run/dpdk/spdk_pid78170 00:17:34.544 Removing: /var/run/dpdk/spdk_pid78412 00:17:34.544 Removing: /var/run/dpdk/spdk_pid78547 00:17:34.544 Removing: /var/run/dpdk/spdk_pid79888 00:17:34.544 Removing: /var/run/dpdk/spdk_pid80318 00:17:34.544 Removing: /var/run/dpdk/spdk_pid80447 00:17:34.544 Removing: /var/run/dpdk/spdk_pid81877 00:17:34.544 Removing: /var/run/dpdk/spdk_pid82125 00:17:34.544 Removing: /var/run/dpdk/spdk_pid82254 00:17:34.544 Removing: /var/run/dpdk/spdk_pid83680 00:17:34.544 Removing: /var/run/dpdk/spdk_pid83928 00:17:34.544 Removing: /var/run/dpdk/spdk_pid84062 00:17:34.544 Removing: /var/run/dpdk/spdk_pid85488 00:17:34.544 Removing: /var/run/dpdk/spdk_pid85964 00:17:34.544 Removing: /var/run/dpdk/spdk_pid86093 00:17:34.544 Removing: /var/run/dpdk/spdk_pid86220 00:17:34.544 Removing: /var/run/dpdk/spdk_pid86615 00:17:34.544 Removing: /var/run/dpdk/spdk_pid87326 00:17:34.544 Removing: /var/run/dpdk/spdk_pid87708 00:17:34.544 Removing: /var/run/dpdk/spdk_pid88380 00:17:34.544 Removing: /var/run/dpdk/spdk_pid88804 00:17:34.544 Removing: /var/run/dpdk/spdk_pid89544 00:17:34.544 Removing: /var/run/dpdk/spdk_pid89933 00:17:34.544 Removing: /var/run/dpdk/spdk_pid91863 00:17:34.544 Removing: /var/run/dpdk/spdk_pid92296 00:17:34.544 Removing: /var/run/dpdk/spdk_pid92725 00:17:34.544 Removing: /var/run/dpdk/spdk_pid94782 00:17:34.544 Removing: /var/run/dpdk/spdk_pid95252 00:17:34.544 Removing: /var/run/dpdk/spdk_pid95738 00:17:34.544 Removing: /var/run/dpdk/spdk_pid96773 00:17:34.544 Removing: /var/run/dpdk/spdk_pid97090 00:17:34.544 Removing: 
/var/run/dpdk/spdk_pid98005 00:17:34.544 Removing: /var/run/dpdk/spdk_pid98322 00:17:34.544 Removing: /var/run/dpdk/spdk_pid99244 00:17:34.544 Removing: /var/run/dpdk/spdk_pid99556 00:17:34.544 Clean 00:17:34.805 13:00:52 -- common/autotest_common.sh@1451 -- # return 0 00:17:34.805 13:00:52 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:17:34.805 13:00:52 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:34.805 13:00:52 -- common/autotest_common.sh@10 -- # set +x 00:17:34.805 13:00:52 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:17:34.805 13:00:52 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:34.805 13:00:52 -- common/autotest_common.sh@10 -- # set +x 00:17:34.805 13:00:52 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:17:34.805 13:00:52 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:17:34.805 13:00:52 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:17:34.805 13:00:52 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:17:34.805 13:00:52 -- spdk/autotest.sh@394 -- # hostname 00:17:34.805 13:00:52 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:17:35.065 geninfo: WARNING: invalid characters removed from testname! 
00:18:01.666 13:01:16 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:01.666 13:01:19 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:04.207 13:01:21 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:06.750 13:01:23 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:08.659 13:01:25 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:10.568 13:01:27 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:12.478 13:01:29 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:18:12.478 13:01:30 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:18:12.478 13:01:30 -- common/autotest_common.sh@1681 -- $ lcov --version
00:18:12.478 13:01:30 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:18:12.478 13:01:30 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:18:12.478 13:01:30 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:18:12.478 13:01:30 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:18:12.478 13:01:30 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:18:12.478 13:01:30 -- scripts/common.sh@336 -- $ IFS=.-:
00:18:12.478 13:01:30 -- scripts/common.sh@336 -- $ read -ra ver1
00:18:12.478 13:01:30 -- scripts/common.sh@337 -- $ IFS=.-:
00:18:12.478 13:01:30 -- scripts/common.sh@337 -- $ read -ra ver2
00:18:12.478 13:01:30 -- scripts/common.sh@338 -- $ local 'op=<'
00:18:12.478 13:01:30 -- scripts/common.sh@340 -- $ ver1_l=2
00:18:12.478 13:01:30 -- scripts/common.sh@341 -- $ ver2_l=1
00:18:12.478 13:01:30 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:18:12.478 13:01:30 -- scripts/common.sh@344 -- $ case "$op" in
00:18:12.478 13:01:30 -- scripts/common.sh@345 -- $ : 1
00:18:12.478 13:01:30 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:18:12.478 13:01:30 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:12.478 13:01:30 -- scripts/common.sh@365 -- $ decimal 1
00:18:12.478 13:01:30 -- scripts/common.sh@353 -- $ local d=1
00:18:12.478 13:01:30 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:18:12.478 13:01:30 -- scripts/common.sh@355 -- $ echo 1
00:18:12.478 13:01:30 -- scripts/common.sh@365 -- $ ver1[v]=1
00:18:12.478 13:01:30 -- scripts/common.sh@366 -- $ decimal 2
00:18:12.478 13:01:30 -- scripts/common.sh@353 -- $ local d=2
00:18:12.478 13:01:30 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:18:12.478 13:01:30 -- scripts/common.sh@355 -- $ echo 2
00:18:12.478 13:01:30 -- scripts/common.sh@366 -- $ ver2[v]=2
00:18:12.478 13:01:30 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:18:12.478 13:01:30 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:18:12.478 13:01:30 -- scripts/common.sh@368 -- $ return 0
00:18:12.478 13:01:30 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:12.478 13:01:30 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:18:12.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:12.478 --rc genhtml_branch_coverage=1
00:18:12.478 --rc genhtml_function_coverage=1
00:18:12.478 --rc genhtml_legend=1
00:18:12.478 --rc geninfo_all_blocks=1
00:18:12.478 --rc geninfo_unexecuted_blocks=1
00:18:12.478
00:18:12.478 '
00:18:12.478 13:01:30 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:18:12.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:12.478 --rc genhtml_branch_coverage=1
00:18:12.478 --rc genhtml_function_coverage=1
00:18:12.478 --rc genhtml_legend=1
00:18:12.478 --rc geninfo_all_blocks=1
00:18:12.478 --rc geninfo_unexecuted_blocks=1
00:18:12.478
00:18:12.478 '
00:18:12.478 13:01:30 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:18:12.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:12.478 --rc genhtml_branch_coverage=1
00:18:12.478 --rc genhtml_function_coverage=1
00:18:12.478 --rc genhtml_legend=1
00:18:12.478 --rc geninfo_all_blocks=1
00:18:12.478 --rc geninfo_unexecuted_blocks=1
00:18:12.478
00:18:12.478 '
00:18:12.478 13:01:30 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:18:12.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:12.478 --rc genhtml_branch_coverage=1
00:18:12.478 --rc genhtml_function_coverage=1
00:18:12.478 --rc genhtml_legend=1
00:18:12.478 --rc geninfo_all_blocks=1
00:18:12.478 --rc geninfo_unexecuted_blocks=1
00:18:12.478
00:18:12.478 '
00:18:12.478 13:01:30 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:18:12.478 13:01:30 -- scripts/common.sh@15 -- $ shopt -s extglob
00:18:12.478 13:01:30 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:18:12.478 13:01:30 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:18:12.478 13:01:30 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:18:12.478 13:01:30 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:12.478 13:01:30 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:12.478 13:01:30 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:12.478 13:01:30 -- paths/export.sh@5 -- $ export PATH
00:18:12.478 13:01:30 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:12.478 13:01:30 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:18:12.478 13:01:30 -- common/autobuild_common.sh@479 -- $ date +%s
00:18:12.478 13:01:30 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1732626090.XXXXXX
00:18:12.478 13:01:30 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1732626090.eKpdS4
00:18:12.478 13:01:30 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:18:12.478 13:01:30 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']'
00:18:12.478 13:01:30 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:18:12.478 13:01:30 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:18:12.478 13:01:30 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:18:12.478 13:01:30 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:18:12.478 13:01:30 -- common/autobuild_common.sh@495 -- $ get_config_params
00:18:12.478 13:01:30 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:18:12.478 13:01:30 -- common/autotest_common.sh@10 -- $ set +x
00:18:12.739 13:01:30 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:18:12.739 13:01:30 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:18:12.739 13:01:30 -- pm/common@17 -- $ local monitor
00:18:12.739 13:01:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:12.739 13:01:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:12.739 13:01:30 -- pm/common@25 -- $ sleep 1
00:18:12.739 13:01:30 -- pm/common@21 -- $ date +%s
00:18:12.739 13:01:30 -- pm/common@21 -- $ date +%s
00:18:12.739 13:01:30 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1732626090
00:18:12.739 13:01:30 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1732626090
00:18:12.739 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1732626090_collect-cpu-load.pm.log
00:18:12.739 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1732626090_collect-vmstat.pm.log
00:18:13.680 13:01:31 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:18:13.680 13:01:31 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:18:13.680 13:01:31 -- spdk/autopackage.sh@14 -- $ timing_finish
00:18:13.680 13:01:31 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:18:13.680 13:01:31 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:18:13.680 13:01:31 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:18:13.680 13:01:31 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:18:13.680 13:01:31 -- pm/common@29 -- $ signal_monitor_resources TERM
00:18:13.680 13:01:31 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:18:13.680 13:01:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:13.680 13:01:31 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:18:13.680 13:01:31 -- pm/common@44 -- $ pid=102709
00:18:13.680 13:01:31 -- pm/common@50 -- $ kill -TERM 102709
00:18:13.680 13:01:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:13.680 13:01:31 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:18:13.680 13:01:31 -- pm/common@44 -- $ pid=102711
00:18:13.680 13:01:31 -- pm/common@50 -- $ kill -TERM 102711
00:18:13.680 + [[ -n 6168 ]]
00:18:13.680 + sudo kill 6168
00:18:13.690 [Pipeline] }
00:18:13.707 [Pipeline] // timeout
00:18:13.713 [Pipeline] }
00:18:13.727 [Pipeline] // stage
00:18:13.733 [Pipeline] }
00:18:13.747 [Pipeline] // catchError
00:18:13.756 [Pipeline] stage
00:18:13.758 [Pipeline] { (Stop VM)
00:18:13.770 [Pipeline] sh
00:18:14.055 + vagrant halt
00:18:16.596 ==> default: Halting domain...
00:18:24.744 [Pipeline] sh
00:18:25.029 + vagrant destroy -f
00:18:27.571 ==> default: Removing domain...
00:18:27.585 [Pipeline] sh
00:18:27.870 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:18:27.880 [Pipeline] }
00:18:27.891 [Pipeline] // stage
00:18:27.895 [Pipeline] }
00:18:27.905 [Pipeline] // dir
00:18:27.910 [Pipeline] }
00:18:27.920 [Pipeline] // wrap
00:18:27.925 [Pipeline] }
00:18:27.935 [Pipeline] // catchError
00:18:27.942 [Pipeline] stage
00:18:27.944 [Pipeline] { (Epilogue)
00:18:27.954 [Pipeline] sh
00:18:28.236 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:18:32.483 [Pipeline] catchError
00:18:32.485 [Pipeline] {
00:18:32.501 [Pipeline] sh
00:18:32.823 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:18:32.823 Artifacts sizes are good
00:18:32.833 [Pipeline] }
00:18:32.850 [Pipeline] // catchError
00:18:32.863 [Pipeline] archiveArtifacts
00:18:32.870 Archiving artifacts
00:18:32.977 [Pipeline] cleanWs
00:18:32.990 [WS-CLEANUP] Deleting project workspace...
00:18:32.990 [WS-CLEANUP] Deferred wipeout is used...
00:18:32.998 [WS-CLEANUP] done
00:18:33.000 [Pipeline] }
00:18:33.016 [Pipeline] // stage
00:18:33.022 [Pipeline] }
00:18:33.039 [Pipeline] // node
00:18:33.044 [Pipeline] End of Pipeline
00:18:33.084 Finished: SUCCESS